A Hitchhiker’s Guide to the SDDC Galaxy:
VMWORLD 2016
Prepared By: Michael Knight
09/01/2016
Contents
Introduction
History of our SDDC Journey
A trip through VMWORLD 2016
    General Session Day 1 – Competitive Advantage in the Multi-Cloud Era
    General Session Day 2 – Competitive Advantage in the Multi-Cloud Era
    Breakout Sessions Summary
        The Software Defined Data Center (SDDC)
        vRealize Operations and vRealize Log Insight
        vRealize Automation and vRealize Orchestrator
        vSphere 6 Platform Enhancements
    Solutions Exchange – Tuesday/Wednesday Afternoon with the Experts
        IBM
        HP
        Cisco
        Intel
        SimpliVity
        Zerto
        Turbonomic
        VMware – vExperts
Conclusion
Introduction
To start, the title is in no way original; it was inspired by the title of one of this year's
sessions, which caught my attention just before I arrived at VMWORLD 2016.
VMWORLD 2016 has been one of my most anticipated work events this year. I had set up several
trainings for my team, from VMware and from Cisco, but had not yet had an opportunity to attend
one myself beyond my constant reading and investigation of platform technology. Fortunately, I
have a team of experts to rely on.
I have transitioned most of my own training to management training on our web-based training
system. Even then, time is the commodity the Server Engineering team has the least of.
It takes a lot to manage and deploy platform resources and to properly direct the team's effort
toward meeting both the organic growth and the project-driven resource demands of the company.
We manage 250+ virtualization hosts and ~3,600 VM workloads, and that does not include
infrastructure support for VDI.
For me, this felt like the most critical training and experience I would get this year; there was
no training or conference I would rather have attended. Being able to share the experience with
practically my whole team, including my manager, was the icing on a perfectly made cake.
For the last four years I have worked with an amazing team of people on our digital
transformation to the Software Defined Data Center. One of the things I was really looking
forward to was that, now that I was at a leading healthcare insurance company, I only had to
focus on my goals for our platform, so I would be able to really get what I wanted out of
VMWORLD.
I flew into Las Vegas anticipating all of the great revelations that would happen this year,
the next revolution; after all, what could be grander than the Software Defined Data Center
they pioneered? It would have to be something big.
As soon as I hit the ground I grabbed a taxi and headed to the hotel to get my room and, much
more importantly, to make my way to VMWORLD 2016 registration. I made it to the Luxor, which is
connected to the Mandalay Bay where the conference was being held, and saw blue VMWORLD 2016
backpacks everywhere. It was obvious that attendance would be large this year. I checked in,
dropped my stuff off in the room, and made my way over to the conference with computer geeks
surrounding me on all sides. Once I had registered, the attendant mentioned they were expecting
upwards of 30,000 of us.
30,000 Geeks + VMware + Vegas = One hell of a good time!
I spent the first night with the team; we ate at a Gordon Ramsay restaurant, and everywhere you
looked you saw tech professionals who had made the migration and were in search of food and
libations. Over dinner we discussed our target of fully realizing the value of the conference
and making sure we came back armed to advance the Blue Shield of California platform to new
heights.
We wrapped up our kick-off dinner and all headed out to our respective destinations for the
night. I made it back to my room and rushed to bed, excited about the next day's events, visions
of hyper-convergence, SDDC, and private and hybrid clouds swirling in my brain as I drifted off.
The next morning I proceeded over to the conference and got in line for the opening general
session. When I say line, it was only sort of one, confined by the walls of the hallways, ten
people across and, because I was early and near the front, stretching back as far as the eye
could see. The doors opened just prior to the start of the General Session and we all filtered
into a room the size of two football fields. The music was loud and penetrating, a deep bass
under a current variety of house music, as I made my way to the front center of the room.
Lucky me: I would get a first-hand view and would not need the monitors to see the speakers.
The hired DJ, in a sphere contraption that was half DJ stand and half drum kit, continued to mix
the high-energy beat as the keynote started. Boom (drums), Tomorrow (voice), Cloud....
Digital transformation: the journey we have been on, we as the larger identity of IT. And
tomorrow was here. The Cloud Era was not a fad; it is now the standard. Cloud is the innovator,
whether it is your own internal private cloud, a public cloud, or the use of both in hybrid-cloud
infrastructures. Companies are adopting it at a phenomenal rate, and many have made completing
their digital transformation a top priority.
This year the trend would continue with the Software Defined Data Center (SDDC), leveraging its
existing benefits alongside new ones being announced. But the new direction was that we were all
mature enough for "hybrid clouds" and cross-cloud capabilities.
History of our SDDC Journey
I thought back to when I started this opportunity with a handful of clusters and a few hundred
workloads. Now, in our new colocations, we have driven 90% virtualization and manage thousands
of workloads; we are certainly digitally transforming. Like many others we started this effort
three years ago, and like many of them we have made significant progress in our journey. But
there was still further to go, more to understand, faster adoption to occur, and tomorrow was
certainly here. In the almost four years I have spent here, I have met and worked with some of
the greatest people I have had the pleasure to know. We took a huge journey together, and below
is that story.
My background was as a Sales Engineer, Solutions Architect, Solutions Consultant, and
Professional Services Engineering Manager for technology integrators. What those roles taught me
was that architecture and planning are essential: a vendor's recommended reference architecture
will typically cover 80% of a solution, and you engineer the other 20% as a unique quality of
the company you are doing the work for. They also taught me that the key to success is the
people collaborating with you.
My focus over the last 20 years has been virtualization, starting with End User Computing
(Citrix) and later VMware (vSphere and Horizon). The other areas of expertise I gained over
time, because it was necessary to bring the whole stack together (and often, as a consultant,
you are running solo on a project or POC), were storage (EMC, IBM, HP, NetApp), networking
(Cisco and HP), and hardware, where I worked with all of the major providers (HP, IBM, DELL,
Cisco). I also picked up specialized tooling and scripting within my areas of focus.
So when I landed a job with Blue Shield (nice to move back to Northern California!), I
immediately began assessments of those two platforms and began to evangelize the appropriate
architectures and designs to get the most out of them. This has been a journey, and one I am
happy I took. I was skeptical about working for a non-IT company, but I was going to help forge
my destiny here.
For VMware I turned to the VMware Validated Designs and my own expertise to begin the journey
properly. I did do some Citrix work along the way, but that is a different experience for a
different time. Below is a brief history of the four years we have been on this journey; we have
a long road behind us, but also a long road ahead, and we are ready and well-armed to accomplish
the task. Forgive me if I missed any details; this is a summation of my reflections on my trip
to Las Vegas.
I truly appreciate my team members, past and present, for without them we would not be where we
are today. Thanks for all of the hours, all of the commitment, all of the intellectual
contributions, and most of all, thanks for the teamwork!
The journey for us started in our legacy data centers back in 2012, under different leadership.
When I arrived I knew we had some distance to go and that my work was cut out for me.
Approximately 20% of all workloads were virtualized and 80% ran on traditional compute. I spoke
with management and they definitely wanted everything virtualized so we could really begin to
enjoy the benefits of virtualization. As a first step we did an assessment of the virtual
platform we had, hired a Capacity Manager at my recommendation (I had convinced leadership we
needed a dedicated person for capacity management so we were not just taking a shotgun approach
to our infrastructure), and began P2V projects in earnest to move our underutilized traditional
server workloads into the virtualized environment. We also started directing all projects that
needed compute to the VMware platform, did some immediate refreshes of cluster hardware, ordered
more nodes, and consolidated away redundant cluster sprawl. The same year we also made a
concentrated effort at getting Tivoli tooling and SNMP monitoring for the VMware platform, but
unfortunately you don't win them all. By the end of 2012 we had refreshed, standardized, and
grown the virtual platform to a virtualization ratio of nearly 60%! The platform itself went
from a handful of hosts (about 30) to roughly 100, with approximately a 10:1 consolidation
ratio at that point.
We re-organized in 2013; I moved under IT-Infrastructure Platform Engineering and converted from
contractor to FTE. I immediately began working with my manager at the time to build a
virtualization practice and to move or hire good people to be part of it; the work to be done
would be much more than one person could accomplish. I was lucky to have a manager who empowered
me and told me to build the platform I would be proud of. So with his blessing, and some great
people, we set forth on this multi-year transformation.
We continued to virtualize and to support new compute requirements on the virtual platform.
Capacity management was still being done manually, but it was right: we were able to track
reservations and had a good expectation of what we could deploy and what it would look like.
With additional hardware acquired in 2013 we built our first true purpose-built cluster, the
PERF cluster, intended to host only DB workloads and, through DRS rules, aligned with our
licensing to ensure both capacity and compliance. We also continued to move workloads into more
appropriate clusters, renaming them to better represent their purpose: APP, CORE, PERF, CLOUD
(Cloud Director), and EDGE. By mid-2013 we had been told we would be migrating out of our legacy
data centers and into new colocation data centers we would build out. We immediately began
planning what that would look like, intending to leverage as much of the Software Defined Data
Center reference designs as possible. By the end of 2013 we had ordered the equipment for the
new data centers and finished remediating the legacy data centers to ensure as seamless a
migration as possible, and we planned out all of the colocation infrastructure in the months to
come. We finished the year at approximately 70% virtualized.
The focus in 2014 was balancing the build-out of the new colocation data centers with
maintaining business as usual. We brought up Sacramento first, and as soon as it was fully
operational it was completely tooled and SRM/VR-ready for migration. Within the first couple of
months of operations we started landing our "new" workloads in our first colo; these were not
legacy migrations but new server requirements for new projects in the pipeline. We stopped
directing build activity to the legacy data centers because we wanted to limit the amount of
work needed to migrate them later.
We implemented vCOps (vCenter Operations) as our standard tool, as it gave visibility from the
data center down to the VM and had the ability to do analysis, reporting, and capacity modeling;
best of all, it was a tool native to the platform. I also implemented a weekly run of three
scripts I had used all the time at client sites when I was consulting. I drove a weekly meeting
with the VMware account team (the partner/customer relationship had been sorely suffering) to
ensure that every step we took had vendor involvement and surfaced any potential
recommendations. For the most part it was just validation that we had it all aligned right and
were using a supported deployment methodology that would make it easy for us to serve our
clients, and for support to serve us.
It was the calm before the storm of the migration, and I used that time to make sure my team had
all of the resources they could need to be successful and had input into what we were building,
so they had an investment in its success. The team had grown to ten, including myself. We
finalized the build-out of the colocation facilities in 2014 and even migrated some of the
critical application infrastructures. Other IT teams were working on PureApplication systems
being deployed to the legacy campuses; eventually they too would migrate to the colocation
facilities. I gave some talks on automation and the automated future of the data center, sharing
my vision of a self-healing DC that leveraged vCO (vCenter Orchestrator) and vCenter-triggered
alarms and warnings. We signed up as a team (3 FTEs) to go to VMWORLD together, very exciting,
wanting to hear firsthand about others and their journeys to the SDDC. We completed the year 80%
virtualized.
The following year, 2015, started in a fury; we all had one goal and it had to be done: migrate
everything into our new enterprise data centers. I deployed a multi-site SRM strategy that
allowed each site to migrate through SRM to any other site; I also deployed VR (vSphere
Replication) and, after getting our QoS set correctly, eliminated all array-based replication. I
recreated all of our existing plans, and now the VMs were replicating regardless of datastore
location. We deployed our first vSAN clusters, replacing the traditional compute/storage stack
at each of the branch sites at a lower TCO. We also had VMware come out and do a health
assessment against the hardening guide, since we wanted to ensure our new data center strategies
were right in line with industry standards and, more than that, with the vendor validated
design. We did have some mild remediation to do, but all in all we were blessed by VMware and
told that we had not only a very robust and resilient architecture but also an incredible team
of experts, and that the engagement had been beneficial for both sides.
Later in the year we did a vROps (vRealize Operations) project, since vCOps had been deprecated
and was being replaced by it. That went quite well: we deployed fully to all vCenters, were
collecting metrics, and had migrated our alerting to it as well. We had several custom
dashboards created and plugged into our EMC arrays so we could traverse the whole stack minus
hardware and network; we had all components installed, including Hyperion, which would allow us
to tie into physical infrastructures as well. I worked on getting all of our licensing updated
that year to vCloud Advanced licenses, so we could fully leverage the tooling on the platform
and have visibility and actionable data.
We battled to the end on the legacy migration, moving the last 500 workloads in the last three
months of the year! I was very impressed with the team, and very satisfied with the platform and
tooling we had deployed. I also did some other VM work at our external data center that year,
decommissioning the legacy virtual environment and building a new one to meet all of the demand
and to virtualize our Citrix FI servers. I sent a few people to VMWORLD but missed it myself
that year. By the end of the year we had done it, all of it: the migration, all of the licenses
we requested, the heightened level of VMware support, our native tooling fully deployed, and the
hardware we asked for. Real satisfaction at a job well done, looking forward to the next year
and the migration of the external data center. We had worked too many hours to count, and had
poured our talent and our confidence into the two new data centers, and they worked. The report
I got from VMWORLD 2015 was that it was more of the same as the prior year, working on the
digital transformation; not unexciting, but a validation that we still firmly had our feet on
the right path. The people I sent appreciated it, and I got updates online by catching the
sessions, and in our bi-weekly meetings with VMware through our new Business Critical Support
(BCS) relationship.
This year, 2016, started a lot calmer than the prior two. In fact I remember thinking near the
end of January that it was too calm. But that was OK too; I wanted to focus on standards,
processes, documentation updates, platform advancement, a better understanding of and
relationship with our tooling, and, most importantly, integration with the Network Operations
Center like any other enterprise platform. Waste reclamation was also a hot item for me: I
wanted to make sure we started to eliminate waste in our environments and to evangelize the
impact of oversizing, both on the oversized workload itself and on the whole of the workloads
sharing its cluster or host. I wanted to take a deeper look into advanced analytics and
automated workload placement, and I had meetings with both Cirba and VMTurbo (now Turbonomic)
about automatic workload placement and automatic resource balancing: eliminating the human
factor in balancing between clusters properly, and filling the gap left by DRS, which only works
within a cluster. About this time Derek gave his notice, which admittedly was of no consequence
to the Virt team other than that we would have a new manager. I had the team and myself continue
building out the needed capacity according to our current cluster-configuration roadmap. We
continued to refine the metrics in vROps, and I had a waste reclamation meeting with design
engineering (before we knew who was to be our manager) where I candidly shared my concerns for
the environment: that we needed all players to understand design for the virtual platform, not
just throw away resources and impact overall platform performance.
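The gap mentioned above, that DRS balances hosts only within a single cluster, can be pictured
with a toy greedy rebalancer that works *across* clusters. This is purely my own illustrative
sketch of the concept, not any vendor's algorithm; the cluster names and utilization figures are
made up.

```python
# Toy illustration of the cross-cluster gap: DRS evens out load within a
# cluster, so balancing load *between* clusters (what tools like Turbonomic
# address) needs another layer. All names and numbers are hypothetical.

def rebalance(clusters, threshold=0.10):
    """Greedily shift 1%-utilization 'units' of load from the hottest
    cluster to the coolest until the spread is within `threshold`.
    Mutates `clusters` in place and returns the list of moves made."""
    moves = []
    while True:
        hot = max(clusters, key=clusters.get)
        cool = min(clusters, key=clusters.get)
        if clusters[hot] - clusters[cool] <= threshold:
            return moves
        clusters[hot] -= 0.01   # move one unit of load off the hot cluster
        clusters[cool] += 0.01
        moves.append((hot, cool))

clusters = {"PERF": 0.85, "APP": 0.55, "CORE": 0.40}  # CPU utilization
moves = rebalance(clusters)
print(clusters)  # spread between hottest and coolest now within 10 points
```

In practice the placement tools weigh far more than raw utilization (licensing boundaries,
affinity rules, storage locality), but the shape of the problem is the same: a scope of
optimization one level above what DRS sees.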
Then the re-org happened, and to our great benefit, Design Engineering, Server Engineering, and
Storage Engineering would now be under one common management and operate as a collaborative
unit. This could only be good for the platforms, for the teams, and for me. I now have new
platforms to run in addition to VMware: AIX, hardware, and PureApplication. The key will be
making sure the right strategic and tactical decisions are made, and that we have standards,
process, design, and operational aspects covered. We already have a large body of information,
as we have used SharePoint since 2013 to track the platform, issues, and tasks, including our
designs, as-builts, and reporting information.
We continued our direction on the VMware platform, undeterred from our goals of supporting the
migration and advancing our platforms. As the migration winds down, we are looking forward at
new potential compute requirements, and at some much-needed time to focus on each of our
platforms and make sure we set similar standards and enable the same level of insight for each.
So I have started to integrate all of the platforms we govern, as well as tracking all OS
standards.
We finished our study with Turbonomic and got some great collateral showing value specific to us
(I was super jazzed, until they announced workload placement in vROps) that could make a real
difference in platform performance and in the automatic allocation and de-allocation of
resources.
I am embracing my new role as Team Lead and trying to remain more managerial and less technical,
but it is hard to keep the engineer out. I appreciated the coaching from my management team, and
feel very comfortable that we share similar goals and objectives.
So my objectives this year at VMWORLD were twofold: managerial and technical strategy. I wanted
to focus on mainstream operations of the SDDC, really focus on the business case for NSX, and
apply the other principles from our tooling to enhance our services to our customers and drive
service excellence.
I want to express my sincere appreciation for the opportunity to attend this year, as well as
the opportunity to be part of a grander vision!
A trip through VMWORLD 2016
As I sat in the high-energy General Session room, I could not help reflecting on how, in our own
organization, we have changed the paradigm: deployment of application infrastructures once
measured in months, sometimes years, is now measured in days and weeks. Automation will take us
through the next steps, reducing days of deployment to hours, eliminating the human factor from
Day 0 through Day 2 operationalization via automation and intelligent provisioning of critical
application infrastructures, and shortening infrastructure delivery times to seconds and
minutes.
We had followed the reference designs and have realized a great many of the features within our
platform.
A VMware Validated Design is a prescriptive blueprint with comprehensive deployment and
operational processes mapped out. The advantage of using one is that it is fully vetted by
vExperts and architects working at VMware. The following are clear reasons why you would choose
this methodology:
1. Standardized Designs
2. Proven and Robust
3. Broad Use Cases
4. Comprehensive Documentation
5. Vendor Certified
Would we extend this to embrace services from the public cloud, actually run some of our
infrastructures within it, and become one of those who truly adopt hybrid cloud? The opportunity
made me excited to be here, to be at VMWORLD, where tomorrow was possibilities and
opportunities, to be in tomorrow...
This was starting to feel like we had achieved pace in this marathon, with the realization that
we needed to complete the ecosystem, and then our SDDC (private cloud) would be the success they
reference, a best-of-breed success!
Then came the moment we had all been waiting for: Pat Gelsinger, CEO of VMware, came on stage.
General Session Day 1 – Competitive Advantage in the Multi-Cloud Era
Mr. Gelsinger takes the stage at a brisk walk despite the brace on his right foot. He stops
center stage, turns toward the audience, and says:
"What a provocative way to start! Which way will you face? The obvious answer is that we will
face forward together."
The highlights of the morning session were that all business is now digital business, that there
are no traditional businesses left, and that all businesses need data analytics to continue
transforming into digital businesses and innovating in that space.
It turns out that only 20% of companies are leaders in the digital transformation; the other 8
out of 10 are struggling to get there.
The digital age is well under way, and it is as transformative as the industrial age. It can be just
as disruptive if we don’t plan for it, integrate the components, and truly embrace the digital
transformation.
Pat went on to give a timeline of the cloud, and what adoption rates looked like from the past
into the future:
2006: The cloud begins; digital transformation begins.
2006: 2% Public Cloud (Salesforce); 0% Private Cloud; 98% Traditional IT (29 million workloads)
2011: 7% Public Cloud; 6% Private Cloud; 87% Traditional IT (80 million workloads)
2016: 15% Public Cloud; 12% Private Cloud; 73% Traditional IT (160 million workloads)
Agility – Flexibility – Scalability – Resilience
2021: The 50% cloud mark
2021: 30% Public Cloud; 20% Private Cloud; 50% Traditional IT (255 million workloads)
2030: More than 50% in the cloud
2030: 52% Public Cloud; 29% Private Cloud; 19% Traditional IT (596 million workloads)
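As a back-of-the-envelope check on those figures: going from 29 million workloads in 2006 to a
projected 596 million in 2030 implies roughly 13–14% compound annual growth. A quick sketch,
using only the numbers quoted above:

```python
# Compound annual growth rate implied by the keynote's workload figures.
def cagr(start, end, years):
    """Annual growth rate that turns `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

total_2006, total_2030 = 29e6, 596e6   # workload counts quoted in the session
rate = cagr(total_2006, total_2030, 2030 - 2006)
print(f"{rate:.1%}")  # roughly 13.4% per year
```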
Astounding. This, of course, is a conglomeration of all types of devices, from mobile to IoT,
consuming workloads. The point is that cloud is not a maybe, it is a definite, and this was
decided by predictive analytics. Of course there will be holdouts; even today some don't want or
need their requirements changed, have long-term client lists, and are not that far from
retirement. Believe it or not, I once worked in IT for a boss who always asked for printouts so
he could analyze them by hand.
But the need to analyze never stands still. To successfully run a platform and avoid 90% or more
of the issues, you have to have sound analytics, and now tactical deployment of actions
triggered by those analytics. It is the visibility, and it is a core engine in the fabric of the
Software Defined Data Center, one that maybe does not get enough attention or credit.
Then Pat went into the trends of hosting vs. on-premises deployments:
2016: Hosting is a $60 billion business, with 8.2 billion devices accessing services.
2021: Hosting is a $110 billion business, with 8.7 billion devices consuming services.
Again, he pointed out that these are active devices, not sensors or other measurement devices.
As cloud takes root, IT becomes more cost effective and services become more accessible, placing
a heavy onus on the designs of these solutions.
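Dividing those projections out gives a rough sense of the revenue per active device the forecast
implies; this is nothing more than simple arithmetic on the figures quoted above:

```python
# Implied hosting revenue per active device, from the keynote's projections.
figures = {2016: (60e9, 8.2e9), 2021: (110e9, 8.7e9)}  # year: (revenue $, devices)
for year, (revenue, devices) in figures.items():
    print(year, f"${revenue / devices:.2f} per device")
# 2016 -> about $7.32; 2021 -> about $12.64
```

Interestingly, the forecast has revenue per device nearly doubling, i.e., growth driven more by
deeper service consumption than by device count.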
And because we all like top-ten lists, Pat had one for us: which vertical industry has embraced
cloud most aggressively?
#10 – Construction
#09 – Professional Services
#08 – Securities and Investments
#07 – Insurance
#06 – Transportation
#05 – Manufacturing
#04 – Banking
#03 – Resources
#02 – Communications
#01 – Technology Vendors
So insurance is not so bad; we made the list at #7. Compliance presents the largest obstacles,
but there are some real opportunities for trusted vendors like vCloud Air or IBM Cloud to assist
us.
We are embracing new ways of working, changing the way we work and changing the way we think.
It is really all about freedom vs. control, he went on to say, and needing freedom with control,
because in the end, no matter what, IT owns security.
He stated that in 2016, 80% of compute industry-wide is virtualized (impressive movement), that
a large amount of it is virtualized on VMware, and that we need to complete our journey to the
full SDDC with NSX and hyper-converged building blocks. He mentioned that only then would we be
ready for hybrid clouds, but said not to worry: they had the solution ready for the point when
we are mature, or risk-tolerant, enough to want to use public cloud infrastructure. This was the
gauntlet thrown down: we should all be cross-cloud connected.
He played a quote from VMWORLD 2014 in which Raghu Raghuram, one of VMware's chief
technologists, said: "Increasingly, all infrastructure components we know of are developed and
deployed in software. Even more importantly, the control of this data center is done entirely by
software. The data center is on its way to becoming programmable and automated. We call this the
Software Defined Data Center (SDDC)."
At its core, innovation is the key: an awareness of what the SDDC truly is, what can be plugged
into it, and how it looks holistically. Hundreds of vendors are developing solutions for the
SDDC every day. To name one example, Intel and McAfee are well on their way, in partnership, to
becoming the de facto platform security layer, leveraging the opportunities NSX brings.
Building-block standardization was key to the private cloud infrastructures: highly converged
and contained units of management, automation, compute, storage, and network.
He mentioned VMware’ commitment to helping the customer make this transition, stating
VMWare products allows the agility to achieve a lot of scale very quickly.
He revealed Cloud Foundation (Data Center Automation – Just add water, and SDDC Manager
so you could push down your security and ultimately workloads out to some cloud for some
reason. He revealed the real VMware Cloud Foundation was the SDDC and use of their vCloud
Air or IBM Public clouds. That the new product SDDC Manager would be able to allow you to
evaluate many metrics such as latency cost, etc... To make choices in where to deploy or
duplicate a workload.
Some of the features were interesting as you could set the policies and to what level you have to
be able to automate and control the Data Center on premise or off. And overall continues to
support VMware’s Validated Software Driven Data Center Design.
The VMware Cloud Foundation and SDDC really bring together several things into a recipe for
success:
- vSphere 6
- vCenter 6
- NSX – Network Virtualization
- vRealize Operations
- vRealize Automation
- vRealize Orchestrator
- vSAN – Software Defined Storage
- SDDC Manager
- Private Cloud
- Public Clouds (vCloud, IBM, and other API-connectable clouds)
- Security enhancement and micro-segmentation of workloads
- 3rd-party extensibility plugins and products for even more services (security infrastructure,
load balancing, etc.)
I certainly see the value in the time and effort of putting together solid enterprise solutions
that leverage the whole stack in software. Security is the true enabler of cloud adoption; there
are hundreds of appliances that can plug into this, and the more we innovate the more we can do.
The VMware Cloud Foundation automates all aspects of the infrastructure deployment lifecycle.
- Makes Private Cloud Easy
- Enables EASY adoption of Public Cloud
Coupled with SDDC Manager, you have full cross-cloud infrastructure capability without
compromising expectations of a secure and compliant environment.
- Solves Paradigm of Freedom vs. Control to Freedom with Control.
- Allows us to extend our SDDC to the Public Cloud
- Allows us to continue to innovate our business strategies and not be limited to what is
available.
- When going to either vCloud Air or the IBM Cloud, you can deploy an SDDC in hours that
fully meets your security and compliance requirements, making them ideal partners for
healthcare.
While I was listening I sketched out what our logical design would look like. I did not add
components for SDDC Manager, since we are still unclear what that is; my assumption is that it
would be an appliance and would fit well in our primarily appliance-heavy VMware platform. I
also did not depict public cloud usage, as I think we are too early in that process to visualize it
in detail. But it is an area we need more information on, specifically from VMware and IBM.
After listening to Pat, I realized we just need to add their remaining component, SDDC
Manager, which would extend our data center into two of our current vendors' clouds if needed.
Both vendors have indicated they would work with us on adoption and through our compliance
issues.
I also realized that our agility to keep pace with the industry and our competition depends on
our ability to rapidly provision services to both our internal and external clients seamlessly,
quickly, and within our requirements.
The nice thing is that these are both vendors (VMware/IBM) with whom we have deep, long-standing
relationships; both value our partnership, and both are equally happy to help us along
the way, to work through any complexity, and to invest in our success.
Several bullets of data were shared:
- Is IT Department still needed?
o Innovations
o Control
o Security
o Compliance
o Legacy Support
o Cloud support
- There is always a cost, whether private or public, and we can easily analyze this in SDDC
Manager by inputting the costs of our private cluster deployments; then we could see the
real achievable ROI.
- Security is key to Cloud Adoption
- The way of doing business is changing; applications will now be delivered as containers,
specialized VMs for application delivery.
- Hybrid Cloud is the solution for new IT
- Cross-Cloud Services and Actions
o Manage
o Secure
o Deploy
- The SDDC can support any application, on any cloud, to any device.
- VMWare Cloud Foundation is a fully integrated and ready SDDC
- NSX is a fundamental technology to support all of the security requirements in both the
private and public cloud infrastructure
o Allows for application of security policies to our workloads whether on premise
or public cloud.
o Same NSX technology for both. Setup unified security templates to be applied.
o NSX Features:
 Policy
 Firewall
 Encryption
 Routing
 Switching
 DHCP
 Micro-Segmentation for VM security
 East-West Protection
The general session for the first day had several guests on stage, from the CTO of Johnson &
Johnson and a VP of IBM to Michael Dell, CEO of Dell, Inc.
Dell's message was that they are very excited to be part of such a vibrant ecosystem, that they
see the partnership and direction of VMware as right on track, and, most importantly, that the
products will still be delivered by VMware.
The quotes of previous years ran through my mind, and like prophecy, the data center and the
industry have indeed evolved along those lines.
Good first day General Session, stoked about the day, tomorrow, where we go next!!!
Now, back on earth, on to my normal breakouts and the simpler questions, like: tell me all of the
new features of vROps, and the Log Insight integration now that it is part of my license. Today I
would like to see progress toward the ability to analyze, collect, and report on both unstructured
and structured data in the vSphere platform. Tell me about the fling or new way to move my
Wintel vCenter to the appliance and up to 6.0. More mundane, but the overall vision of the
morning is reassuring: we are still on track.
General Session Day 2 – Competitive Advantage in the Multi-Cloud Era
Admittedly, I watched this session on Tuesday evening in my room on YouTube, but I still
wanted to provide my summary.
The Day 2 general session starts with Sanjay Poonen, EUC VP, who begins with a discussion of
what VMware is doing in this space with Horizon and AirWatch. The world is going digital, and
we are in the 4th revolution for humanity, the digital age. He states that Any Device, Any
Application, Any Cloud is the focus of VMware. (Sound familiar – Citrix)
He was a charismatic speaker, and the information on Workspace One was very compelling.
Everything about their product line is cloud driven; they value identity and access to
applications from any device and location, the mobile experience. He talked about the
integration with their Mobility Experience Management product AirWatch, and how together
with Horizon this makes up Workspace One.
They have been very successful at selling this as a service as well, through Horizon Air,
offering the remote experience that the new workforce expects. Management of devices and
security is handled through AirWatch, with conditional access features protecting company data
from being downloaded to unauthorized endpoints.
At this point he introduced Stephanie Buscani, Executive VP at Salesforce, who came on stage
and told of their experience with Horizon and AirWatch, the Workspace One experience. She
was of course very positive and described several key ways they rely on the service.
Stephanie went on to say they could not deliver all of their products if it were not for this
trusted methodology delivered through VMware. Again, NSX was a critical component for VM
micro-segmentation and for securing the underlying infrastructure.
She described how Salesforce has a great deal of success in their deployment:
1. Robust, Open, and trusted methodology that is how they deliver all of their applications
2. Salesforce is in a Partnership with VMWare and that is due to a deep trust in the
platform.
3. SSO and Security are done entirely through Airwatch
4. Overall simplicity of Security Templates and application of them
VMware mentioned there are two as-a-service options: through Horizon Air and through the
IBM Cloud partnership. He talked briefly about exciting opportunities to use App Volumes to
deliver applications while maintaining the lifecycle of a single volume. He mentioned that NSX
is a critical piece of the puzzle due to micro-segmentation.
He also stated that Salesforce was just one of many Fortune 500 Companies that trust them for
their services. He stated that the Horizon team is focused on TCO of virtual desktops and are
targeting to get the costs of a VDI to under $5900 per year, a clear improved ROI over other
competitors.
At this point he invited the CEO of TrustPoint, one of VMware's recent acquisitions, to come
out and discuss endpoint security. One of TrustPoint's unique differentiators is that it allows for
plain-English queries and searches through the global network to return results. It is powered by
Tanium, a leader in the endpoint security space. It was an interesting demo, and they proved
they could do a number of searches and remediations right through TrustPoint. The main
objective was to discuss its extensibility to AirWatch and the power they bring together to the
VDI space.
They wrapped up the EUC discussion with the point that Workspace One allows for all of the
management and visibility through a single interface, that users seamlessly are delivered
applications securely, to any device, and anywhere.
The session was then turned over to the CTO of VMware, Ray O'Farrell. Ray came out and
immediately started talking about what they hope their relationship with their customers will be,
and how through it they would help customers understand how cloud enables the goals of the
enterprise. Consuming services can be as powerful as innovating on your own requirements.
Bottom line, they would like to continue to be the trusted advisor we feel comfortable coming to
in order to better understand the landscape of the new IT and how to overcome hurdles in our
digital transformation.
The key challenges are:
- Enabling the Organization to make the digital transformation through understanding of
the new technologies.
- Security for the new Enterprise, understanding of the new security challenges.
- Enabling deployment or integration of Cloud Applications
- Understanding Containers and how they change the Application delivery model.
At this point the CTO of Cloud Platform for VMWare took the stage to talk more about what
VMWare is doing to make Cloud easy. He talked about the evolution of the cloud and about the
use of Containers becoming more prolific. He also talked about the worries of the enterprise in
supporting new models of business, and IT’s perspective on complexity to implement:
- Networking
- Monitoring
- Accounting
- Storage
- Security
- Portability
- Repeatable Deployments
- Incident/Problem Management
- Availability
- Backup
- Disaster Recovery
- Business Continuity
He then unveiled VMware Integrated Containers as well as VMware's own container platform,
Photon. He said both are equally impressive in the breadth and scope of what they can do, but
he wanted to focus on VMware Integrated Containers because of its capabilities and the
simplicity for IT to adopt. He flashed back to the "VMware and you" slide shown previously,
saying the balancing act is between scalability, security, and compute: listening to and
understanding each unique aspect of the business, and helping developers and IT see that there
really is no complexity in making this work within the right framework. VMware Integrated
Containers makes that happen:
Technically, this looks like the SDDC diagram shown in the Day 1 general session:
With the SDDC being portable, you can utilize hybrid cloud to improve response times
geographically, or to reduce costs by choosing between private and public cloud pricing.
Ultimately it detaches the application from an OS, and this level of simplicity reduces the
surface for issues to occur. It does require a shift in how we in IT look at application delivery,
as well as in how future versions of traditional in-house applications are developed toward a
more web-centric infrastructure.
He went on to list the features of Integrated Containers:
He wrapped up with the benefits that were tangible for IT:
- Containers are just another VM from the Infrastructure Management perspective.
- NSX provides heightened levels of security for these VMs
- Container Management is completely integrated within the vSphere Web Client, the
default administration interface for vSphere
- Easy to use with vRealize Automation, and with SDDC portability move to where they
are needed
- Automatically integrates to vRealize Operations. Full monitoring of the VM and
connected infrastructure in vRealize Operations
- vRealize Automation is ideal to provision and manage deployment of Containers
- Container Management is also embedded in vRealize Automation Console
- Adds a new layer of abstraction from the OS
- Containers are extensible, and 3rd-party products can easily enhance or be added to
Containers
And lastly before turning to discuss NSX he revealed Photon, with the disclaimer it was not yet
fully mature.
This is currently under development but another innovation for the SDDC. The topic turned to
NSX, Network Virtualization.
- The biggest transformation to the traditional network stack
- Envisioned in academia in 2013
- 1,700+ customers have deployed it as of 2016
Much like compute and storage, it is critical that the network also makes the paradigm shift
from the hardware stack to a software-driven service, which enables advanced feature sets,
capabilities, and services, including east-west security and micro-segmentation of VMs,
ensuring all exposures are remediated and only what serves each VM's purpose is available.
NSX is really broken into three focus areas that drive the whole of the technology: security,
automation, and application continuity. NSX provides strategic security and network services at
every level of the stack today, and its ability to be entirely integrated into the automation of the
SDDC makes it a revolutionary step forward in both the security and network spaces.
The average cost of a security breach is over $4 million, plus an immediate impact to corporate
reputation. Changes to traditional network components and configurations take days, sometimes
even months. NSX allows workloads to be deployed at VM speed, which is minutes or hours
depending on complexity, fully automated and policy driven. With NSX, your network and the
security of those critical networks are always on.
NSX allows for true application portability between data centers and even public cloud
infrastructures. NSX operates at every layer of the OSI model, introducing new heights of
security capability, and network management and provisioning that is unparalleled by its
traditional counterpart.
“NSX gives you a secure agile network which is key to the critical operations of your company.”
Where do we start to make this happen?
- Assess – Do an NSX pre-assessment report. There is a free tool, vRealize Network
Insight, that needs to be run to ascertain the current state of the network.
- Plan – NSX needs to be installed at this point so that deeper insight can be gained.
vRealize Network Insight can provide the rules that need to be deployed to secure the
environment.
- Enforce – Once the plan is developed it can be moved over for enforcement, and this
can be deployed all at once or in phases.
- Monitor – Do the NSX assessment again to establish that everything is configured and
enforced as expected. vRealize Network Insight is meant to be used on a regular basis
to review the security of the networks where we have workloads deployed.
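The Plan step, turning observed traffic into a whitelist of rules, is what vRealize Network Insight automates. Here is a toy sketch of the idea only; the flow tuples and rule dictionaries are my own invention, not the real NSX rule schema.

```python
# Toy sketch of the "Plan" step: derive a default-deny whitelist from the
# flows observed during assessment. vRealize Network Insight does this
# analysis for real; the flow format and rule shape here are illustrative
# assumptions.
from collections import Counter

def derive_rules(observed_flows, min_occurrences=2):
    """Turn observed (src_group, dst_group, port) tuples into allow rules.

    Flows seen fewer than min_occurrences times are treated as noise and
    left to the implicit deny-all rule for manual review.
    """
    counts = Counter(observed_flows)
    rules = [
        {"action": "allow", "src": src, "dst": dst, "port": port}
        for (src, dst, port), n in counts.items()
        if n >= min_occurrences
    ]
    # Final catch-all: anything not explicitly allowed is denied.
    rules.append({"action": "deny", "src": "any", "dst": "any", "port": "any"})
    return rules

flows = [
    ("web", "app", 8443), ("web", "app", 8443),
    ("app", "db", 1433), ("app", "db", 1433), ("app", "db", 1433),
    ("web", "db", 1433),  # seen once: likely noise, stays denied
]
rules = derive_rules(flows)
```

The default-deny posture is the point of micro-segmentation: only the repeatedly observed, reviewed traffic gets an explicit allow, and everything else falls through to the final deny rule.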
- NSX is transformative and foundational for your network and the security of your
network.
- NSX analyzes the workloads to determine security for each workload.
- NSX provides end to end security of the workloads through micro segmentation, those
settings become intrinsic to the workload.
- NSX creates workload security that is appropriate, but is also portable and moves with
the VM.
He wrapped up the discussion on NSX stating that there is no better time than now to embrace it,
it is a key component of the SDDC. He passed on the microphone to the VP of vSan at this point.
The discussion started with the point that most of the new hyper-converged infrastructure (HCI)
offerings are powered by vSan, and that customers have had great success with vSan in their
environments, whether homegrown solutions or packaged solutions like vxRail.
She went on to say vxRail makes an ideal building block considering what you get within a 2U
form factor: 4 compute nodes, integrated storage, and networking. You can use up to 64 nodes
per cluster under 6.0, and you can purchase the smaller building blocks or upgrade to vxRack
and build at a larger scale.
While traditional storage solutions would still have a place in the data center the storage that
powered VM infrastructures would likely be within the HCI due to the reduction of costs and the
higher end performance of DAS versus SAN or NAS connected storage.
One of the keys to building the Data Centers of tomorrow would be the use of HCI, and
standardized building blocks for build, and capacity.
- vSan is optimized for flash; in fact, on all-flash deployments you get deduplication and
compression features that shrink the overall data footprint, positively impacting
available capacity. Also, directly attached flash eliminates the latency incurred at the
fabric, so storage performance is not the bottleneck.
- Redundancy and resilience is as good as traditional arrays or better.
- vSan is completely integrated in the vSphere Web Client as well as vROps allowing for
in depth monitoring, and analysis of trends or impacts.
- vSan runs some of the most critical workloads from some of the best known application
vendors for a lot of Fortune 500 companies. Application workloads like Oracle, SQL,
SAP, Exchange, Etc…
- vSan is widely adopted in VDI infrastructures
- vSan streamlines deployments and is easily expandable by adding additional compute or
storage nodes.
- HCI = Hyper Converged Infrastructure
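The capacity bullets above can be put into rough numbers. The (ftt + 1) mirroring factor reflects how vSan RAID-1 stores copies for failures to tolerate; the 2x dedup/compression ratio and 30% operational slack are illustrative assumptions, since real reduction ratios depend entirely on the data.

```python
# Rough back-of-envelope for all-flash vSan usable capacity. The
# (ftt + 1) overhead models RAID-1 mirroring; the dedup ratio and slack
# reserve are illustrative assumptions, not vendor-guaranteed figures.

def effective_capacity_gb(raw_gb, ftt=1, dedup_ratio=2.0, slack=0.30):
    """Estimate the logical data that fits on a vSan cluster.

    raw_gb      : total raw flash capacity across the cluster
    ftt         : failures to tolerate; RAID-1 stores (ftt + 1) copies
    dedup_ratio : logical-to-physical reduction from dedup + compression
    slack       : fraction reserved for rebuilds and operations
    """
    usable_physical = raw_gb * (1 - slack) / (ftt + 1)
    return usable_physical * dedup_ratio

# Example: 4 nodes x 10 TB raw each, FTT=1, assumed 2x data reduction.
cap = effective_capacity_gb(40_000)
```

In this example the mirroring overhead and the assumed 2x reduction roughly cancel out, which is why all-flash dedup/compression matters so much to the HCI cost story.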
She showed a slide with a cloud provider who uses vSan extensively:
IBM Cloud has also adopted vSan-based HCI; they have stated they are able to provision
storage at 3x the pace, and they also see overall VDI performance improve by as much as 10
times in some use cases.
She went on to discuss the use of Virtual Volumes (VVols), which makes storage very
VM-centric, more of a 1:1 relationship. Instead of allocating datastores, which will always carry
some level of waste, you provision storage directly to a VM, and it grows and shrinks as the
VM requires within the bounds of the vSan capacity.
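The waste argument for per-VM provisioning is easy to quantify with a toy packing example. The VM and datastore sizes here are invented, and real datastore planning is more nuanced than the first-fit packing sketched below.

```python
# Illustrative comparison of fixed-size datastore carving versus per-VM
# (VVol-style) provisioning. Sizes are made up; the point is the stranded
# capacity that fixed datastores leave behind.

def datastore_waste_gb(vm_sizes_gb, datastore_gb=2048):
    """Capacity stranded when VMs are packed first-fit into fixed datastores."""
    stores = []  # free space remaining in each carved datastore
    for size in vm_sizes_gb:
        for i, free in enumerate(stores):
            if free >= size:
                stores[i] -= size
                break
        else:
            # No existing datastore fits: carve a new one.
            stores.append(datastore_gb - size)
    return sum(stores)

vms = [900, 900, 900, 600, 600, 300]
waste = datastore_waste_gb(vms)  # GB stranded across fixed datastores
vvol_waste = 0                   # per-VM volumes allocate exactly what is used
```

Here six VMs totaling 4,200 GB force three 2 TB datastores to be carved, stranding almost 2 TB, while VM-granular volumes would consume only what the VMs actually use.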
vSan is the default storage type in the Integrated Containers reference architecture, and it is the
storage behind the VMware Photon container platform. The preference here is for the
heightened performance of vSan, which enables trusted performance of the containers
delivering application services.
She said there was more on the horizon for vSan in the near future, and she listed some of the
features they are looking to release in the next version:
- Policy based management, introducing intelligent metrics that will change how we can
use and prioritize actionable data.
- Introducing End-To-End encryption.
Lastly she mentioned it is fully integrated with vRealize Automation and that all configuration
for vSan is done through the vSphere Web Client.
That wrapped up the general sessions for VMWORLD 2016, I felt both were of tremendous
value and provided great insight into the now and the future to come.
Breakout Sessions Summary
The breakout sessions are mostly technical in nature and they provide a micro level of
information in most cases. I have listed the Breakout sessions I took and I will summarize the
most exciting points in those sessions as potential best practices, feature enhancement, and
platform evolution.
So for the most part my classes were about SDDC, vRealize Operations Manager, and deep dives
on particular components of the stack.
It was always my intention when deploying the new colocation facilities that we would drive the
SDDC and all of the components so we could realize the promise of the holistically designed
Software Driven Data Center.
The following are the break-out sessions I attended:
Monday
1. SDDC and Hyper-Converged Have Arrived, Get Onboard! (SDDC9035-S)
2. Hyper-Converged Infrastructure at Scale vxRack 1000 SDDC (SDDC9023) – Toured the
Solutions Exchange, met with EMC, and discussed vxRail and vxRack, also went by
Simplivity and Nutanix to review their offerings.
3. How to Manage Health, Performance, and Capacity of Your Virtualized Data Center
Using vSphere with Operations Management [INF8275R]
4. The KISS of vRealize Operations! [MGT7718]
5. VMware Validated Design for SDDC – Operations Architecture Technical Deepdive
[SDDC8423]
Tuesday
1. vSphere DRS and vRealize Operations: Better Together [INF7825]
2. Deep Dive into Deploying the vRealize Cloud Management Platform the VMware
Validated Designs Way! [SDDC8946]
3. Getting the Most out of vMotion: Architecture, Features, Performance and Debugging
[INF8644] – Skipped for session with vExpert on Simplified Data Center Management
with vROps and vRO
4. VMware Cloud Foundation Backup and Disaster Recovery [SDDC9181] – Skipped for
Veeam discussion at Solutions exchange.
Wednesday
1. Extreme Performance Series: DRS Performance Deep Dive—Bigger Clusters, Better
Balancing, Lower Overhead [INF8959]
2. Extreme Performance Series: Virtualized Big Data Performance Best Practices
[VIRT8738] – Skipped for Solutions Exchange vExpert Session on vRops
3. SRM with vRA 7: Automating Disaster Recovery Operations [STO8344] - Skipped for
Solutions Exchange vExpert Session on vRops
4. An Architect's Guide to Designing Risk: The VCDX Methodology [INF9048] - Skipped
for Solutions Exchange Intel session on their Security Controller and NSX
5. The vCenter Server and Platform Services Controller Guide to the Galaxy [INF8225]
6. An Industry Roadmap: From storage to data management [STO7903] – Jason and I went
around the Solutions Exchange, met specifically with Zerto and Simplivity, but spent
time with several other vendors as well.
Virtual
1. Manager’s Guide to the SDDC
2. Hyper Convergence in Healthcare; The key to doing more with less.
3. Digital Transformation – Technology and Business Futures, a CTO’s perspective.
4. An IT Architect's Guide to the Software-Defined Data Center
I immersed myself in many aspects of the data center; I even added some virtual sessions when
I got home, since there were conflicting sessions I could not attend. It was very cool that this
year they made 80% of everything available online immediately at www.vmworld.com. If you
are a prior attendee, you can log in and look at any previous year as well as the current
VMWORLD 2016 breakout sessions. I of course encourage you to explore that, and if you need
any assistance getting access please let me know and I will be happy to help you.
Network Virtualization (NSX) was something that weighed heavily on my mind the whole
conference: our inability to overcome the political issues blocking deployment of such a
STRONG component with so much CAPABILITY. It is worth mentioning that it certainly does
not shorten the network administrators' task list, as we need a robustly maintained network to
support it. Nor is it a threat; as we have mentioned every time, we would like to work with them
on implementation and set them up with a network administrator role in VMware that would
allow them to continue to manage the network, both physical and virtual.
There was so much opportunity that we are missing out on, everywhere I went I heard about
what we could be doing with NSX, especially when I visited Intel’s booth this year, they showed
me the ecosystem they are recommending for use on NSX…
Primarily I will do this section in bullet-point format and will scan in and include diagrams from
my notes that I think have value. I will not cover each session, as some of them had information
that would be redundant with others. Instead I decided to summarize by topic: the Software
Driven or Software Defined Data Center (SDDC), vRealize Operations (vROps), and finally
vSphere 6. I will likely cover vSphere 6 very lightly, since it has not fundamentally changed
how things work, so I will probably just highlight some of the new features. I will also not
spend much time on NSX, since I believe I covered it in some detail in the general sessions,
along with my disappointment that it is missing from our stack.
The Software Defined Data Center (SDDC)
“By the end of 2016, every relevant IT organization will have standardized on a Software
Defined Data Center approach to IT. The key to creating an agile enterprise. VMware’s task is to
help our customers transform to this new model as fast as possible.” –Pat Gelsinger, CEO of
VMware, December 2014
“Wake up as a software company” – GE
That is because we are all now software companies: we rely on our infrastructures, our
applications, and our delivery of digital services to our users and customers. No longer can we
ignore that we have a HUGE investment in software; in fact even our data centers are a
conglomeration of software at almost every level, the Software Driven Data Center (SDDC).
New IT has cross functional capabilities enabled by the SDDC. The ways we are enabled are:
1. A new level of agility is possible.
2. Cloud and As-A-Service are enablers of the modern digital age companies
3. Deployment times have been decreased from days to hours and minutes
4. We have Application Agility through SDDC portability to any cloud, private or public
5. New building blocks enabled with Hyper-Converged Infrastructures (HCI)
We have done this with the adoption of the Software Driven Data Center, or SDDC. So what is
the SDDC?
The SDDC is a highly automated, easily managed platform that embraces all applications and
delivers them anywhere. An SDDC is just like it sounds: the software version of its physical
counterpart. It is decoupled from the physical layer, allowing the whole of the data center to be
portable.
- All infrastructure virtualized
- Automated by software
- IT delivered as a service
- IT perceived as an enabler
- IT is able to extend the SDDC to the Cloud or other Data Center
- Unparalleled Analytics
- Convert from on premise to a partner compliant cloud automatically once executed
The Software Defined Data Center (SDDC) has taken us to new heights of capabilities enabling
all aspects to have agility, resiliency, portability, and flexible elasticity. Components such as
Virtual SAN (vSan) and Network Virtualization (NSX) are bridging the data centers and clouds
which in turn stretches the compute capabilities to all of them.
It is important to understand what makes up the SDDC, as it has many components that are all
highly integrated and integral to its full potential; I included a couple of key diagrams depicting
SDDC 3.0. The SDDC is a VMware Validated Design, which means it has been fully vetted
and is a robust, proven infrastructure.
The goal of the SDDC is to extend virtualization to the whole data center. Several rich features
make this ecosystem robust and able to handle the most rigorous workloads with ease.
1. Extend virtual compute to all applications
2. Virtualize the network for speed and efficiency, also to introduce new layers of security
that protects the data center
3. Transforms Storage by aligning it with application demands, making storage more
personal to the VM or Container-VM.
4. Management tools are giving way to automation
5. Greatly reduces deployment times in the Data Center
This year in the first day general session Pat revealed the holistic vision for the SDDC, which is
portability of the SDDC and its policies, configurations, security, and services to any Cloud,
Private or Public. Removing the barriers for a lot of companies to fully leverage the hybrid cloud
model. It is a lot easier to adopt the public cloud when you know you will not be compromising
your corporate security and compliance requirements. The announcement of two new tools that
would complement the SDDC were also made, VMware Cloud Foundation, and SDDC
Manager.
VMware Cloud Foundation has the ability to completely deploy a new SDDC Data Center in
your Public or Private Cloud. In the public space it is fully supported or “integrated” you might
say with VMware vCloud Air and IBM Consumer Cloud, but with API and some tweaking you
are able to use Amazon, Azure or Google as well. The overall message is that there are no
barriers, if you don’t have the infrastructure go to cloud, if you have it go on premise until it
makes more economic sense to go public.
The SDDC Manager tool enables intelligent use of cloud infrastructure. It provides reporting on
the TCO of each cloud, including your private cloud once you enter your costs. SDDC Manager
simplifies operations and provides a single interface that is easy to consume and make choices
from.
- Single management platform that brings it all together regardless of location
- Automates and simplifies deployments or scale out operations
- Rich support for Docker, Containers, and Volumes
- Private Cloud and Public Cloud – Hybrid Cloud Manager
SDDC Manager brings a rich feature set that really enables you to understand hybrid cloud, and
the implications to cost and availability based on your choices and investment in the tool.
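The TCO reporting described above boils down to a break-even calculation between an amortized private cluster and metered public capacity. This sketch uses made-up dollar figures, not real vCloud Air or IBM pricing.

```python
# Hedged sketch of the ROI comparison described for SDDC Manager:
# compare an amortized private cluster cost against a public cloud's
# per-VM-hour rate. All dollar figures are invented inputs.

def public_monthly_cost(rate_per_vm_hour, vms, hours=730):
    """Monthly public cloud cost for a given number of always-on VMs."""
    return rate_per_vm_hour * vms * hours

def breakeven_vms(private_monthly_cost, rate_per_vm_hour, hours=730):
    """Number of steadily running VMs at which private and public cost the same."""
    return private_monthly_cost / (rate_per_vm_hour * hours)

# Assumed inputs: $25,000/month amortized private cluster, $0.25/VM-hour public.
be = breakeven_vms(25_000, 0.25)
```

With these assumed numbers the break-even lands around 137 always-on VMs: below that, public capacity is cheaper; above it, the private cluster wins, which is exactly the choice the tool is meant to surface once you enter your own costs.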
SDDC and the new tools really bring the holistic vision together, making hybrid clouds truly
possible by automating and streamlining the setup of services. The whole of your SDDC is
portable, allowing easy movement, scaling, and duplication of critical application sets across
any of your infrastructure, eliminating physical boundaries.
Cloud Foundation and SDDC Manager streamline your Day 0 through Day 2 operations as
well, eliminating the time-consuming human interaction needed to operationalize. It integrates
entirely with vRealize Operations for manageability and analytics, and is ideal for control by
vRealize Automation.
The crux of the SDDC and the new cross-cloud services is a tightly interwoven, integrated
ecosystem built entirely in software that decouples critical data center functions from the
physical aspect of the stack.
A discussion on SDDC would not be complete without discussing Network Virtualization
(NSX), this is because the ability to be truly portable is dependent on NSX being deployed.
There has been large adoption of NSX in the last two years by some very significant companies,
including insurance and healthcare organizations. There is broad industry backing with lots of
3rd-party enhancements: companies like Intel, which has partnered with VMware and McAfee
to build out a heuristic analytical architecture that improves the resilience of the network and
adds deep inspection as a key value in the security space. The heart of Intel's offering is their
Security Controller appliance, which layers on top of NSX.
Companies like IBM have fully adopted the SDDC, including NSX, to power their
enterprise-class consumer cloud, and VMware is using their own technology to power their
cloud. This means integrating your data center with these clouds is seamless. The best part of
this ecosystem is having unlimited capacity on demand.
NSX eliminates bulky physical infrastructure and minimizes the need for physical firewalls.
Traditional networking is a slow and arduous process for most organizations: consider the
volume of firewall rules supported manually in the enterprise, and the manual analysis required
to ensure there are no conflicts that will prevent a service from functioning properly. NSX
breaks down the traditional barriers, yet enhances the robustness of your security and network
and shortens deployment timeframes.
In the SDDC there is a lot of east-west traffic, and NSX adds a layer of control across the
east-west boundaries of the platform, truly extending the network and security benefits of
virtual deployments. NSX is what streamlines the portability of VMs and apps: it attaches a
security/network profile that maintains the workload's critical settings, allowing workloads to
traverse not only clusters within the same data center but also other data centers as the need
arises, even into supported public cloud infrastructures.
Micro Segmentation of the VMs is no small benefit, NSX creates a virtual network that is
independent of the underlying IP network hardware. Administrators can programmatically
create, provision, snapshot, delete and restore complex networks all in software. VMware
describes micro-segmentation as the ability to “build security into your network’s DNA.” Intel
listed in a current solutions brief the 7 benefits of Micro-Segmentation:
1. No Ripping or Replacing What You Have in Place - VMware NSX runs on top of any
network hardware, so you don't have to buy or replace any appliances. In addition,
there's no disruption to your compute and networking infrastructure or applications.
2. Reduce Escalating Hardware Costs - Deploying more physical appliances to handle the
growing volume of workloads inside the data center is cost-prohibitive. Looking at the
capital expense alone, VMware NSX is enabling actual enterprise organizations to save
68%. This savings is based on estimating what physical firewalls would cost if IT
administrators tried to approximate the same degree of control that micro-segmentation
provides.
3. Curtail Firewall Rule Sprawl - Bloated firewall rules are a real problem in security
management. Over the years, administrators can inherit unnecessary and redundant rules,
and there’s no easy way to figure out which rules are no longer needed. Firewall rule
sprawl can make security audits nightmarish. Out-of-date and conflicting rules can even
be an unintended source of security vulnerabilities. With micro-segmentation and
VMware NSX, policies are orchestrated centrally and linked to the VMs they protect, so
you can automate security policy management throughout the entire data center via a
single interface. When a VM is provisioned, moved or deleted, its firewall rules are also
added, moved or deleted.
4. Tune-Up Performance with more Efficient Traffic Patterns - With physical networks,
workload traffic is often required to traverse more than one network segment to reach
routers and firewalls, only to come back to an adjacent workload (an inefficient pattern
called hair-pinning). With micro-segmentation, traffic can usually stay in the same virtual
network segment, reducing the impact on the physical network. As a result, you eliminate
the extra costs and inefficiencies associated with over-subscribing core links.
5. Meet the Individual Needs of LOBs and Departments - Because VMware NSX and
micro-segmentation work independently of your physical infrastructure, you gain
tremendous flexibility in moving resources around and keeping security in lockstep with
change. Because security is handled through software, policies can be created and
operational within minutes, eliminating the lag time associated with installing more
security hardware or reconfiguring network systems. Figure 1 shows how easily you can
update security policies to match the needs of individual LOBs and departments. In this
example, the IT department has decided to virtualize the desktops throughout Human
Resources (HR). With micro-segmentation, creating and applying the security policies for
the virtual desktops for HR takes a matter of minutes. You simply tag all relevant systems
“HR” and VMware NSX automatically applies the correct security policies.
6. Add a Valuable New Knowledge Area for your Networking Specialists -
Administrators use the same skill sets that they have acquired around VMware
virtualization, so major security improvements don’t require a major learning curve.
Hardware networking specialists acquire new software skills that keep them at the
leading edge of both hardware and software networking security technologies.
Developing expertise in the Software-Defined Data Center (SDDC) and network
virtualization areas is a tremendous addition to the professional skills of network
administrators and architects.
7. Future Proof your Operations - Micro-segmentation makes securing workloads much
easier, faster and less expensive. As a result, you can support changes with greater
confidence, and even reallocate resources to new project areas. Network virtualization
with VMware NSX is also a significant—and non-disruptive— step towards the SDDC
model. This means you’re not only strengthening security today, you’re also laying
important groundwork for the SDDC of the future.
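The tag-driven policy model described in points 3 and 5 can be sketched in a few lines. This is an illustrative toy model, not VMware's API: the policy names, tags, and rule strings below are all hypothetical, but it shows the key behavior — rules are resolved from tags attached to the workload, so they follow the VM and disappear with it.

```python
# Toy model of tag-driven micro-segmentation (hypothetical names, not the NSX API).
# Policies are defined once, centrally, and keyed by security tag.
POLICIES = {
    "HR": ["allow tcp/443 from HR-desktops to HR-apps", "deny any to Finance-db"],
    "Finance": ["allow tcp/1433 from Finance-apps to Finance-db"],
}

def effective_rules(vm_tags):
    """Resolve a VM's firewall rules from its tags.

    Because rules are derived from the VM's tags rather than from its IP or
    physical location, deleting the VM (or its tags) removes the rules with it.
    """
    rules = []
    for tag in vm_tags:
        rules.extend(POLICIES.get(tag, []))
    return rules

# Tag a new HR virtual desktop "HR" and the correct policies apply immediately.
print(effective_rules(["HR"]))
```

Centralized policy plus tag-based resolution is what eliminates the rule sprawl described in point 3: there is no per-appliance rule set left behind when a workload moves or is deleted.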
Hyper-Converged Infrastructure (HCI) is the integration of the technology stack, including
compute, storage, and network, into a simple building block that is easily consumable within the
data center. It is a paradigm shift from the traditional compute, storage, and network silos of the
past. Though not strictly required, HCI is an unspoken part of the SDDC: the SDDC 3.0
VMware Validated Design assumes you are using vSan ready nodes and products such as vxRail
or vxRack in the private cloud. In the public cloud there is no real assumption, but both IBM and
VMware vCloud Air are based on that methodology.
HCI offers many benefits over the traditional stack:
1. Simplified building blocks
2. Lower TCO
3. Better overall performance
4. Shortened deployment cycles
5. vSan-based solutions are best for the SDDC
6. Ability to adjust the scale of the building blocks to meet data center requirements in less
space than traditional infrastructure
7. Leverages a robust leaf-and-spine network topology
8. Continued logical segregation of Management, Edge, and Consumer workloads
9. Overall “ease of deployment”
For the on-premises private cloud it is recommended to choose a building block so that you can
design the data center holistically, increase ROI, and consolidate the traditional platforms by
replacing them with HCI. One of the most frequently recommended solutions, brought up in
multiple sessions along with a vExpert session on SDDC, was the use of vxRail and, at scale,
vxRack. This makes a lot of sense versus the traditional stack, as shown below:
This would be an example of a Single Resource Unit (SRU), a standardized building block. This
particular building block is based on the VCE vxRail 280F, an all-flash unit in the maximum
configuration available from them. We can of course decide what our standardized building
block is by doing our own research and due diligence; for example, our SRU might be a
Simplivity unit, a Cisco HyperFlex, or a self-built unit like those we use in our current vSan
clusters.
Once you have selected your building block you can begin to visualize what resources you can
deliver within a given footprint. As you will see in the example below, we can achieve a much
denser, more cost-effective footprint at rack scale than we can with our current traditional stacks
of compute, storage, and network, thereby increasing ROI and lowering TCO.
The comparison to traditional compute is not even an equal one: because our racks hold only
compute or only storage, achieving the same capacity requires close to a 5:1 ratio, steeply
driving up the cost of doing the business we need to do. We still appear to be a cost center
because managing the traditional stack is expensive, and on top of that the pace of physical
deployment is slow and costly as well.
On top of there being a revolution in the data center because of the SDDC, there is also a
revolution in application delivery, and it is not isolated to the End-User Compute (EUC) sector
of IT; it challenges application infrastructures with the same mandate to innovate and make a
digital transformation: to go from traditional application infrastructures and traditional front-end
clients in client/server computing to Cloud Native Applications. Cloud Native application
infrastructures, just like EUC, are decoupled from the OS; they run in their own space and get
their own resources. The good news is that they are still VMs from a management and lifecycle
perspective.
The benefits of this transformation are identical to the benefits of Hyper-Converged
Infrastructure, and this revolution is called containers:
- Simplified Building Blocks
- Docker Compatible
- Lower TCO
- Overall Performance Benefits
- Shortened Deployment Cycle
- Overall “Ease of Deployment”
- Intelligent Automation
- Designed for HCI and evolved for SDDC Ecosystem
VMware has two types of containers: one that runs on the current SDDC infrastructure, and
one that is platform-integrated with the SDDC — vSphere Integrated Containers and Photon
Platform containers, respectively. A container is a representation of compute, storage, network,
and security from the application's perspective. Its deep integration with the SDDC brings a lot
of opportunities in how we build, design, define, and deliver the application while improving its
portability, security, performance, capabilities, and lightweight framework. The point is to
eliminate the BLOAT, not add to it. There are clear scale and economic benefits at all layers.
To understand the benefits of containers you have to reflect back on the holistic benefits of the
Software Defined Data Center ecosystem. At every level containers gain robustness. I would go
so far as to say containers are VMware's statement of its agility and commitment to providing
superior ROI to its customers. As Pat said on day one, "…which way will you face? The obvious
answer is we will face forward together!"
Benefits that are derived from the SDDC for Containers:
- Intelligent Automation
- Up to 6x to 8x faster deployment cycles by eliminating complex processes around system
design, testing, deploying, configuring, or scaling
- Repeatable processes; for containers these would be part of a pattern, part of a workflow
delivered through vRealize Automation
- Increase admin productivity by up to 2x by automating day 0 through day 2 tasks such as
patching, updating, security hardening, and monitoring
- Containers are likely to reduce the overall TCO of application delivery by 30% to 40% by
decoupling the OS
- Eliminates hardware costs when delivered as a service through cloud infrastructure:
develop on private cloud, potentially deploy to public cloud, while maintaining all profile
configurations such as security and network requirements
- Portability not just inside the Data Center, but easily outside to other Data Centers or
Public Cloud infrastructures
- Design with objectives and intelligent decision points to fully leverage available
capabilities and consume or deliver services
So what is a container, according to VMware?
vSphere Integrated Containers (VIC) combines the agility and application portability of Docker
Linux containers with the industry-leading virtual infrastructure platform, offering hardware
isolation advantages along with improved manageability. VIC consists of several different
components for managing, executing, and monitoring containers. One of the critical
components is the Virtual Container Host (VCH).
The Virtual Container Host (VCH) is the means of controlling, as well as consuming, container
services – a Docker API endpoint is exposed for developers to access, and desired ports for
client connections are mapped to running containers as required. Each VCH is backed by a
vSphere resource pool, delivering compute resources far beyond that of a single VM or even a
dedicated physical host. Multiple VCHs can be deployed in an environment, depending on
business requirements. For example, to separate resources for development, testing, and
production. - VMware
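Because a VCH exposes a standard Docker API endpoint, developers consume it the same way they would any remote Docker daemon: by pointing the ordinary Docker CLI (or DOCKER_HOST) at the endpoint. A minimal sketch of building such an invocation — the endpoint address below is a made-up example, not a real host:

```python
# Sketch: a VCH is consumed through its Docker API endpoint, so the standard
# Docker CLI targets it with -H, just like any remote daemon. The hostname and
# port here are hypothetical placeholders.
def docker_command(vch_endpoint, *args):
    """Build a Docker CLI invocation against a VCH endpoint."""
    return "docker -H {} {}".format(vch_endpoint, " ".join(args))

cmd = docker_command("tcp://vch-dev.example.com:2376",
                     "run", "-d", "-p", "8080:80", "nginx")
print(cmd)  # docker -H tcp://vch-dev.example.com:2376 run -d -p 8080:80 nginx
```

The point of the sketch is that nothing changes for the developer: the same Docker commands work, while the "host" behind the endpoint is actually a vSphere resource pool rather than a single VM or physical machine.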
Integrated Containers for vSphere benefits:
- Automatically integrates with vRealize Operations, giving you the ability to set up
monitoring and alerts for containers, create a dashboard specific to a group of containers,
report on them, and perform advanced analytics.
- Ideal for use with vRealize Automation, for resource provisioning, management, and
deployment of containers.
- It’s an ideal building block methodology.
- Leverage existing tool sets to monitor, manage, and support them
- Containers console is fully integrated within the vSphere Web Client
- Fully leverage vSan and vVols
This is fully visualized below.
The key to the portability of containers, and to realizing their full value, will be the adoption
of NSX and the ability to fully virtualize the resources used by the container, so it is not locked
in place. The container is a pinnacle of virtualization, taking full advantage of everything
virtualization has to offer; what we invest, we will get back in the return from containers. The
next transformation, or migration, really is to Cloud Native Applications, and the standardized
building block and source of delivery will be the container. NSX also enables Active/Active
configurations, provides load-balancing features, and is further extensible with 3rd-party
products to enhance the security of our application infrastructures.
Some of the full feature sets that really provide value and resilience to containers are categorized
below:
- Container Management: Portal for container administrators to manage containers,
repositories, images, and hosts, representing the whole lifecycle of the container.
- Container Registry: Enterprise registry to store container images, manage replication, and
allow role-based access control to the company’s critical images.
- Container Engine: Docker API compatible, and deeply integrated into the SDDC and
vSphere.
There is already a rich and diversified portfolio of services available that enhance containers and
add functionality. Now we have the platform, and it has the ability to be anywhere we want it, at
any time we want, with no concerns for its security profile or settings, as those move with it.
In conclusion, we really need to spend some time reflecting on what digital transformation has
meant for us; we need to deepen our commitment to the ecosystem and overcome obstacles of
understanding if we truly want to realize the full value of the Software Defined Data Center.
“VMware provides the software platform that enables our customers to consume and deliver
applications and services that power their enterprise. Deep levels of integration and capability.”
–VMware
Our job in IT is now to deliver superior application capabilities and open up new opportunities
to improve our business. The ability to use the SDDC to decouple the application from the
infrastructure gives the customer a great deal of freedom to deliver however we would like.
It DOES require a new journey to begin, but the journey has rich rewards and ultimately
allows IT to be perceived as an innovator to the business, not a cost center.
vRealize Operations and vLog Insight
I have been on the vCenter Operations Manager bandwagon since the beginning. Having at your
disposal a complete view of the entire virtualized data center has many benefits: from real-time
monitoring and analysis to resolve an issue, to trending usage and predictively evaluating
different outcomes to size a workload appropriately, to modeling the future of the compute
platform and seeing the impact of choices so we can proactively ensure capacity and availability
of the whole platform holistically.
We immediately started taking advantage of the health statistics and being able to track
performance to the real bottleneck. We set up alerts and created unique views of some of
our most critical resources. We worked within the boundaries of having only the Standard
edition at that time; for example, we could not customize a view to look at multiple servers at
once as an application set, but we could pull analytics for each server in the set and review them
manually, looking at very long time frames or narrowing the views to the specific metrics we
wanted to analyze. We could do capacity modeling and analysis, even though we did not have
the ability to apply reservations to resources, so some further manual analysis was required.
But between vCenter Operations and vCenter statistics we had platform-native information from
host to VM. We also had several scripts in PowerCLI and Perl that we used for more traditional
analysis as needed, for validation, or to automate the as-built documentation of a cluster.
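The as-built automation mentioned above was done in PowerCLI against vCenter, but the idea is simple enough to sketch (Python here, with hypothetical inventory field names): walk the inventory and emit a document per cluster.

```python
# Sketch of automated as-built documentation. The inventory structure and field
# names are made up for illustration; our real scripts pulled this data from
# vCenter via PowerCLI.
def as_built(cluster):
    """Render a plain-text as-built summary for one cluster."""
    lines = ["Cluster: {}".format(cluster["name"])]
    for host in cluster["hosts"]:
        lines.append("  Host {}: {} GHz / {} GB RAM".format(
            host["name"], host["cpu_ghz"], host["mem_gb"]))
    return "\n".join(lines)

cluster = {
    "name": "prod-cl01",
    "hosts": [
        {"name": "esx01", "cpu_ghz": 41.6, "mem_gb": 512},
        {"name": "esx02", "cpu_ghz": 41.6, "mem_gb": 512},
    ],
}
print(as_built(cluster))
```

Running such a script weekly is what kept the documentation current without manual effort.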
The next level offered so much more, so I put together a presentation on our current licensing,
the features we had and their limitations, and the much greater capability of the uplifted
versions. Through that vessel, and several discussions about capabilities, we made the decision
to go to the next level of licensing, which allowed for custom views, super metrics, high
availability, and free updates to subsequent versions. This also allowed for greater levels of
automation through what at the time was called vCAC (vCloud Automation Center), as well as
tighter integration between the two products and vCenter. We also made the decision to go to
the next level of support, Business Critical Support, which assigned us a specific engineer and
gave us access to vExperts and the advanced support capabilities VMware offered as a premium
service. One of the first tasks we did with them was to go over our whole implementation and
our current data, and to talk holistically about our overall goals and our journey to a VMware
Validated Design 2.0 deployment. We met with them on all areas of our network, met their
vExperts for each, and validated that we had done the right thing. We included the whole team,
as we wanted everyone to feel pride in what they had accomplished and in what the next steps
were.
vCOps changed how we ran the team, and all data from vCOps was reviewed at a high level for
each data center each morning. We used the scoring system as a report card. We also ran
weekly scripts that documented the entire infrastructure: vSphere platform health, capacity,
and performance reports. We went through vCOps and enabled a lot of the reporting at the
cluster level, which allowed us to capture:
1. Oversized VM Report
2. Undersized VM Report
3. Cluster Health Summary
4. Resource usage reports
5. Capacity Remaining
There were several more reports we could have selected, but we wanted to stay limited at the
time to pertinent data that would have a positive effect on our operations. As any of my team
would tell you, I have a saying: "If the house is on fire, then I don't care about the add-on
bedroom you are working on." All resources methodically engage to assist in resolution, and
their first sources of data were powered by vCOps and the information analyzed through it.
In 2014 VMware made an acquisition and deprecated the older vCenter Operations Manager in
favor of vRealize Operations Manager, taking the best of both worlds: from the acquired
product, its best features for virtualization and its superior engine; from vCOps, the look and
feel, the customizable views, and all of the platform-specific features, reports, workload
analysis, and default views. They combined the two to make a "best in breed" native tool that
was entirely integrated and completely extensible. We were of course immediately sold, since
there were no additional license costs and the upgrade was free to us. It brought much-needed
features and functionality, as well as higher levels of integration and analytics, but also some
fundamental changes in its architecture; the task of moving over the existing data so we did not
lose our six months of retention introduced some complexity. So we agreed to bring in VMware
Professional Services to collaborate with us on the solution. We are not a sit-on-the-sidelines
organization; we just needed a good engineer with the specific skill set for the product to assist
us in the design and deployment of the new solution and its integration into the SDDC.
The upgrade went smoothly, the engagement with VMware went perfectly, and we customized
several dashboards and learned how to fully leverage the different aspects of the tool. We had
included training in the engagement, along with all of the documentation to support each step
and provide good, future-usable collateral. We are always excited about the next uplift because
we know it is going to bring more capability, and we are equally excited about the extensibility
through management packs to connect to the other platforms we rely on for detailed statistics
and visibility of the whole stack.
This year I was not disappointed: they announced some great new features being released with
the 6.3 version of vRealize Operations (vROps). So this year we came to VMWORLD to take it
further, to understand the path for vROps and the features released this year:
1. vLog Insight became part of the vCloud Advanced licensing, so we have the full
version for every host in our environment. It allows us to use unstructured data to
support our engineering and operations on the platform. This adds the remaining layer of
integration we had hoped for: now with vROps we can analyze, alert, monitor, and act
upon all available data, including what is in the hosts' logs.
a. This will help with root cause analysis (RCA) and provide a heightened level of
insight into the platform, and it will also become the store where host and other
component logs are kept.
b. We can write workflows that are triggered by vROps based on fixed metrics and
unstructured data, leveraging the close integration of vROps with vRealize
Orchestrator (vRO) and vRealize Automation (vRA).
c. Splunk does not integrate with vROps and is not the native tool on the SDDC for
analysis of unstructured data; therefore, to remain in line with our commitment to
the SDDC 3.0 VMware Validated Design (VVD), we need to use vLI. The two are
duplicate-featured tools; the difference is that vLI is integrated with vROps and
the SDDC, and Splunk is not.
d. There is also a cost benefit with vLI: we already own it as part of our VMware
licensing.
e. It furthers the holistic goal of the "self-healing" data center.
2. Automatic Workload Placement has been introduced into the vROps capability set.
Through tight integration with the Distributed Resource Scheduler (DRS) that is part of a
vSphere cluster, it ensures the amount of committed resources is available, or moves the
VM to a host that has them. Because of this tight integration with vSphere, they are able
to deliver policy-driven workload placement technology that fills the gap between DRS
at the cluster level and resource availability at the SDDC level.
a. Automatic Workload Placement is a manually engaged activity
b. vROps closely analyzes benefits and trends to move the best-suited workloads to
the less utilized cluster
c. Initial placement on the cluster and ongoing resource management are done by
DRS and occur every 5 minutes (15 x 20-second time slices); 20+ VM metrics
and 5+ host metrics are checked to determine the best placement within the
cluster based on the required usage.
d. vROps queries data from vCenter every five minutes, receiving 15 x 20-second
interval time slices
3. Actionable Events provides, on some alerts and warnings, an Action button so that you
can take action using the recommended or self-set setting, and reboot if necessary; this
mainly applies to resource management of VMs. The action happens immediately, so it
needs to be used carefully: make sure any changes being made have been through proper
change management and other appropriate processes (e.g., a VM uplift) prior to acting.
4. Hardening Guide Automation is another announced feature: you can upload the
hardening guide with which you have configured your security stance and settings, and
act upon those items within the environment. You could of course use that capability to
add custom items as well. It is another part of the resilience of the SDDC and another
security control in place.
5. Capacity Modeling Reservations allows you to put in your current capacity requests,
and the cluster will balance accordingly while waiting for the workloads to become
"real."
Of course, one of the drawbacks was that you could not schedule the action, but through proper
alert actions you could automate and schedule the requirement using vRA and vRO. Many of
the common tasks performed in vSphere are already set up as workflows in vRO, so there is a
lot of opportunity to leverage data from vROps to drive automated actions.
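As a rough illustration of the placement idea — a toy model only; the real vROps engine weighs 20+ VM metrics and 5+ host metrics over 20-second slices (15 slices per 5-minute cycle) — the cluster-selection step amounts to picking the least-utilized eligible cluster that still has headroom for the workload:

```python
# Toy model of policy-driven workload placement (not the vROps algorithm).
# clusters: name -> (used_ghz, capacity_ghz). `excluded` models the policy's
# opt-out list for VMs that must stay where they are.
def best_cluster(clusters, demand_ghz, excluded=()):
    """Return the least-utilized cluster with enough headroom, or None."""
    candidates = [
        (used / capacity, name)
        for name, (used, capacity) in clusters.items()
        if name not in excluded and capacity - used >= demand_ghz
    ]
    return min(candidates)[1] if candidates else None

clusters = {"cl01": (80.0, 100.0), "cl02": (40.0, 100.0), "cl03": (95.0, 100.0)}
print(best_cluster(clusters, demand_ghz=10.0))  # cl02: lowest utilization with headroom
```

Once the target cluster is chosen, initial placement within that cluster and ongoing balancing remain DRS's job, which is exactly the division of labor described above.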
But the point is that vROps is the brain of the SDDC, with deep integrations with all of the
products and cross-integrations between other products in the SDDC. With management packs
and Hyperic you can further integrate your infrastructure and see that data in vROps. As an
example, IBM AIX has a management pack that plugs into vROps and allows you to report on
that environment. We have also integrated EMC MPs so we can gain visibility into the
underlying arrays and storage.
So it was time to take a look under the hood and really understand how it all works. Let's dive
in and really see what the new vROps is about. We had it deployed, and we are using a lot of the
capabilities of vROps, like capacity management and modeling, platform/workload analysis,
waste reclamation, incident response and resolution, and assisting in root cause analysis. We do
in-depth weekly reporting so we can look at the different aspects of the platform and assist in
making recommendations on health, performance, and capacity. We are managing our whole
alerting system and NOC integration for improved response and faster resolutions. But were we
fully leveraging all of the capabilities of vROps and getting the most value from it that we
could?
I spent some of my time with a vExpert over in the VMware Pavilion in the Solutions Exchange.
I told him starting out that I wanted to cover any changes in the infrastructure design of vROps,
and then discuss the new features of the product in the 6.3 release. I also attended a couple of
sessions which focused on this release and some of its features.
I wanted a more holistic approach, and having worked on enterprise tools in the past I was
curious when they would go to a master/collector architecture. I had recently looked at the
SDDC Reference Architecture 2.0 and discovered that there was indeed a recommended better
approach: a distributed approach with a single pane of glass.
So I went to the vExpert and did not initially reveal what I had learned and referenced on my
own, as I wanted his input based on how we are currently deployed.
I explained that we currently had two distinct deployments in HA pairs, and that while this had
been a design decision made by our vROps consultant when he came out to do the deployment,
I have since questioned the logic of the disparity: though we are able to get data, we have to go
to the data center in question to get data about that data center. He immediately explained and
whiteboarded the distributed approach, with a vROps cluster at the main site and collectors at
the remaining site, using SRM to protect the vROps master infrastructure at the secondary site.
I did not take a picture of the whiteboard, but it looked like this:
As shown in the depiction, you would want a master-and-collector methodology for the vROps
components, but a distributed master/worker methodology for Log Insight. SRM could be used
to support DR for the vROps core infrastructure, which would also be the strategy for the vRA
infrastructure. The underlying software infrastructure of vROps shows a robust layered design;
in the SDDC, vROps sits front and center, the "brains" of the SDDC:
VMware describes vRealize Operations as a general-purpose analytics engine which can take
inputs from multiple sources within the SDDC — or, through management packs, from a variety
of other platforms and technologies — and produce meaningful reporting and data from those
sources.
1. Storage Management Packs for most leading vendors (HP, EMC, HDS, NetApp)
2. Hyper-converged Infrastructure, especially vSan ready HCI
3. Traditional hardware through Management Packs or Hyperic (HP, DELL, IBM, Cisco)
4. AIX / Power VM
5. Other 3rd-party solutions that have taken advantage of the SDK or APIs to provide an
appliance with additional inputs
6. vRealize Automation /vRealize Business are tightly integrated with VROps
Having used many different third-party tools in the past, I can confidently say that vROps, with
its extensibility and built-in features, outstrips other tools. We run a lot of scenarios through
vROps and consume it on a daily basis to ensure that everything in the data center is well
protected. It has shortened our time to resolution on issues and has given us valuable capacity
information that we have used to forecast our needs and "roadmap" our requirements over the
last two years. Due to its tight integration with the SDDC, and specifically vSphere, we have
enjoyed an unparalleled view of the enterprise and have realized a real ROI in using this tool in
our environment.
It is helping us to meet some very key IT challenges utilizing its metric analysis and intuitive
reporting:
1. More Control
2. More Agile
3. Data Overload
4. Over-Provisioning
5. Under-Provisioning
6. VM Sprawl
7. Enables the SDDC at every layer
Now, with its integration with Log Insight and vRealize Business, we can take the tool to new
levels in our ability to analyze and cost the environment, and realize a deeper value. Integration
of Log Insight provides the ability to ingest unstructured data (logs), perform analysis, and
write rules based on log entries to enable further actions and provide more comprehensive
visibility into issues, which will further shorten our time to resolve incidents. We will also
realize better protection of log information. The addition of vRealize Business to our licensing
model, and its tight integration with vROps, adds business intelligence to our repertoire of
capabilities, enabling us to validate cost models and drive appropriate cost recovery for services
to our customers.
vROps is also intelligent and will alert you if sources stop collecting data. This allows you to
ensure all sources are reporting accurately and, if not, to identify the source of the issue quickly.
Its extensibility enables many sources of data to be integrated into vROps analytics and
reporting, offering new layers of information and allowing visibility at every enabled layer.
Adding new features like Automatic Workload Placement drives a deep integration with DRS
on the clusters. This integration adds intelligence at the cluster level, making clusters happy,
just as DRS does at the VM level, making VMs happy. Policy-driven and manually enacted,
imbalances in cluster compute resources can be remediated fairly quickly.
vROps has also added a DRS dashboard where you can see how each cluster is configured and
balanced, and make changes to DRS settings right from the dashboard. Workload Utilization
shows a graphical map of how the clusters are utilized, giving you more insight into what is
going on inside them.
After configuring virtual data centers or policies in vROps, it will be able to act upon cluster
workloads when enacted. The policy also allows you to exclude VMs from this activity if you
have special requirements for them to remain within a particular cluster.
Also, once you upload the proper hardening guide from VMware, appropriately configured, you
can automatically remediate issues with the security posture of VMs, clusters, and the whole
data center. You will receive actionable alerts when there are compliance issues.
In this version you can model capacity not only for clusters but also for VMs, and have these
reservations remain in place to be used in determining the cluster's remaining capacity, allowing
you to see current utilization as well as the computed utilization of the reservations. This gives
a true measure of a cluster's remaining capacity, allowing you to remediate any constraints. You
can also set the date the reservation will occur, so that it is automatically removed once it has
been realized.
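The arithmetic behind reservation-aware capacity can be sketched as follows. This is an assumed model for illustration, not vROps internals: remaining capacity subtracts both current demand and reservations that have not yet been realized, while realized reservations are already reflected in current demand.

```python
# Sketch of reservation-aware remaining capacity (assumed model, not vROps
# internals; the field names are hypothetical).
def remaining_capacity(total_gb, demand_gb, reservations):
    """Remaining = total - current demand - reservations not yet realized."""
    pending = sum(r["size_gb"] for r in reservations if not r["realized"])
    return total_gb - demand_gb - pending

reservations = [
    {"name": "proj-a", "size_gb": 64, "realized": False},  # still only planned
    {"name": "proj-b", "size_gb": 32, "realized": True},   # already counted in demand
]
print(remaining_capacity(total_gb=512, demand_gb=300, reservations=reservations))  # 148
```

This is why a reservation can be dropped automatically on its realization date: once the workload is real, its consumption shows up in demand and counting the reservation as well would double-book the capacity.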
Within the capacity module you can also explore reclaimable resources prior to adding new
resources to a cluster, allowing you to act upon those items from vROps and manage your
resources.
vROps continues to sort information into three areas: Health, Risk, and Efficiency. Health
covers the immediate issues impacting the data center today that need attention; it can also
show undersized VM resources. Risk covers time remaining and forecasted loads, a more
predictive component allowing you to proactively address resource shortfalls before they
become real shortfalls impacting the environment. Efficiency covers opportunities to reclaim
resources and to understand whether there is waste in the environment that needs reclamation.
The extensibility of vROps through management packs opens up new avenues of data collection
and enterprise visibility. One of the management pack collections I learned about is from
Blue Medora. They now offer a rolled-up suite called the True Visibility Suite (TVS), which
further enhances vROps with additional platform support and reporting.
In summary, vROps brings a lot to the platform, with deep integrations at every layer of the
SDDC. It provides a rich feature set with unparalleled out-of-the-box (OOB) capabilities,
while offering superior extensibility to integrate new functionality above and beyond the
powerful native feature set. We have seen the ROI of vROps and the use of its functionality
in our organization many times over. The tool is not only native to the platform but stands
apart from other tools for several reasons:
1. Intelligent and Predictive – Today's IT application infrastructures are complex and run
across multiple tiers of software and hardware. vROps understands that and easily adapts
to and supports virtualization, hybrid cloud, and complex multi-tiered applications.
vROps was designed to be intelligent and predictive, able to analyze platform
requirements and make useful recommendations and take action.
2. Smart Alerting – vROps identifies the root cause of issues and notifies administrators,
while at the same time filtering out unwanted noise (extraneous notifications) and making
recommendations for remediation. The overall benefit is less time spent on problem
resolution.
3. Policy-Driven Automation – vROps comes OOB with several policies that can be
customized, so you can quickly begin realizing value by automating monitoring and
guided remediation of issues. vROps immediately recognizes and integrates with the
vSphere infrastructure and begins self-learning through its anomaly-based
monitoring.
4. Enterprise Vision and Comprehensive Execution Capabilities – Tool proliferation is
often due to the closed nature of tools that can only monitor and act upon a single focus
area. One of vROps' greatest differentiators is its highly extensible platform, which has
enabled a rich third-party Management Pack (MP) ecosystem. By implementing
additional MPs, vROps can capture and analyze data from the application layer all the
way down to the storage and network layers. Once the data is analyzed, it is presented
via the unified dashboard.
5. Enforce Compliance – With its new ability to ingest an uploaded hardening guide, vROps
ensures from a security standpoint that the hosts and servers running are as hardened and
secure as possible. Issues that arise are brought to administrators for remediation.
6. No Hidden Costs – We get vROps with our licensing, and each licensing level has a
clear set of capabilities. For example, management packs cannot be used in the Standard
edition, and this is made clear; the ones available for Advanced/Enterprise can easily be
found in the marketplace, several are free, and where there is a cost it is clearly noted.
OOB, vROps has unparalleled ability to monitor, report, and maintain visibility into your
vSphere environment.

Weitere ähnliche Inhalte

Andere mochten auch (6)

VMworld 2015: The Future of Network Virtualization with VMware NSX
VMworld 2015: The Future of Network Virtualization with VMware NSXVMworld 2015: The Future of Network Virtualization with VMware NSX
VMworld 2015: The Future of Network Virtualization with VMware NSX
 
Emc vipr srm workshop
Emc vipr srm workshopEmc vipr srm workshop
Emc vipr srm workshop
 
VMware NSX for vSphere - Intro and use cases
VMware NSX for vSphere - Intro and use casesVMware NSX for vSphere - Intro and use cases
VMware NSX for vSphere - Intro and use cases
 
Reference design for v mware nsx
Reference design for v mware nsxReference design for v mware nsx
Reference design for v mware nsx
 
VMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep Dive
 
An Introduction to VMware NSX
An Introduction to VMware NSXAn Introduction to VMware NSX
An Introduction to VMware NSX
 

Ähnlich wie A Hitchhikers Guide to the SDDC Galaxy - VMWORLD 2016

Ähnlich wie A Hitchhikers Guide to the SDDC Galaxy - VMWORLD 2016 (20)

Virtualization Spurs ERP Operations and Disaster Recovery for Sportswear Gian...
Virtualization Spurs ERP Operations and Disaster Recovery for Sportswear Gian...Virtualization Spurs ERP Operations and Disaster Recovery for Sportswear Gian...
Virtualization Spurs ERP Operations and Disaster Recovery for Sportswear Gian...
 
Le Moyne College Accelerates IT Innovation with help from Local Solution Prov...
Le Moyne College Accelerates IT Innovation with help from Local Solution Prov...Le Moyne College Accelerates IT Innovation with help from Local Solution Prov...
Le Moyne College Accelerates IT Innovation with help from Local Solution Prov...
 
Cloud Powered Services Delivers Revenue Growth and Business Agility for SMB T...
Cloud Powered Services Delivers Revenue Growth and Business Agility for SMB T...Cloud Powered Services Delivers Revenue Growth and Business Agility for SMB T...
Cloud Powered Services Delivers Revenue Growth and Business Agility for SMB T...
 
Microservices for Java Developers
Microservices for Java DevelopersMicroservices for Java Developers
Microservices for Java Developers
 
Ghosts of technology
Ghosts of technologyGhosts of technology
Ghosts of technology
 
Legacy IT Evolves: How Cloud Choices Like Microsoft Azure Can Conquer the VMw...
Legacy IT Evolves: How Cloud Choices Like Microsoft Azure Can Conquer the VMw...Legacy IT Evolves: How Cloud Choices Like Microsoft Azure Can Conquer the VMw...
Legacy IT Evolves: How Cloud Choices Like Microsoft Azure Can Conquer the VMw...
 
Microservices for-java-developers
Microservices for-java-developersMicroservices for-java-developers
Microservices for-java-developers
 
Mr WIAP is Your Friend
Mr WIAP is Your FriendMr WIAP is Your Friend
Mr WIAP is Your Friend
 
Thomas Duryea’s Journey to the Cloud: Part One
Thomas Duryea’s Journey to the Cloud: Part OneThomas Duryea’s Journey to the Cloud: Part One
Thomas Duryea’s Journey to the Cloud: Part One
 
SFWelly dreamforce wrap up September 2023
SFWelly dreamforce wrap up September 2023SFWelly dreamforce wrap up September 2023
SFWelly dreamforce wrap up September 2023
 
Cisco Cloud White Paper
Cisco  Cloud  White  PaperCisco  Cloud  White  Paper
Cisco Cloud White Paper
 
CARMS - Entrepreneur inc
CARMS - Entrepreneur incCARMS - Entrepreneur inc
CARMS - Entrepreneur inc
 
Mobile + Cloud+ Big Data = Digital Win
Mobile + Cloud+ Big Data = Digital WinMobile + Cloud+ Big Data = Digital Win
Mobile + Cloud+ Big Data = Digital Win
 
GIDS 2024 Delegate Dossier.pdf
GIDS 2024 Delegate Dossier.pdfGIDS 2024 Delegate Dossier.pdf
GIDS 2024 Delegate Dossier.pdf
 
T-Mobile Swaps Manual Cloud Provisioning for Services Portal, Gains Lifecycle...
T-Mobile Swaps Manual Cloud Provisioning for Services Portal, Gains Lifecycle...T-Mobile Swaps Manual Cloud Provisioning for Services Portal, Gains Lifecycle...
T-Mobile Swaps Manual Cloud Provisioning for Services Portal, Gains Lifecycle...
 
Analysts Probe Future of Client Architectures as HTML 5 and Client Virtualiza...
Analysts Probe Future of Client Architectures as HTML 5 and Client Virtualiza...Analysts Probe Future of Client Architectures as HTML 5 and Client Virtualiza...
Analysts Probe Future of Client Architectures as HTML 5 and Client Virtualiza...
 
HP Vertica General manager Sets Sights on Next Generation of Anywhere Analyti...
HP Vertica General manager Sets Sights on Next Generation of Anywhere Analyti...HP Vertica General manager Sets Sights on Next Generation of Anywhere Analyti...
HP Vertica General manager Sets Sights on Next Generation of Anywhere Analyti...
 
Cloud and SaaS Force a Rethinking of Integration and Middleware as Services -...
Cloud and SaaS Force a Rethinking of Integration and Middleware as Services -...Cloud and SaaS Force a Rethinking of Integration and Middleware as Services -...
Cloud and SaaS Force a Rethinking of Integration and Middleware as Services -...
 
Private Cloud: Debunking Myths Preventing Adoption
Private Cloud: Debunking Myths Preventing AdoptionPrivate Cloud: Debunking Myths Preventing Adoption
Private Cloud: Debunking Myths Preventing Adoption
 
CloudCamp Chicago - Cloud in Action
CloudCamp Chicago - Cloud in ActionCloudCamp Chicago - Cloud in Action
CloudCamp Chicago - Cloud in Action
 

Kürzlich hochgeladen

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
WSO2
 

Kürzlich hochgeladen (20)

EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
WSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering Developers
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
CNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In PakistanCNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In Pakistan
 
Vector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptxVector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptx
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Platformless Horizons for Digital Adaptability
Platformless Horizons for Digital AdaptabilityPlatformless Horizons for Digital Adaptability
Platformless Horizons for Digital Adaptability
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 

A Hitchhikers Guide to the SDDC Galaxy - VMWORLD 2016

  • 1. A Hitchhiker’s Guide to the SDDC Galaxy: VMWORLD 2016 Prepared By: Michael Knight 09/01/2016
  • 2. Contents Introduction..................................................................................................................................... 3 History of our SDDC Journey......................................................................................................... 5 A trip through VMWORLD 2016................................................................................................. 10 General Session Day 1 – Competitive Advantage in the Multi-Cloud Era .............................. 11 General Session Day 2 – Competitive Advantage in the Multi-Cloud Era .............................. 17 Breakout Sessions Summary..................................................................................................... 27 The Software Defined Data Center (SDDC) ........................................................................ 29 vRealize Operations and vLog Insight.................................................................................. 41 vRealize Automation and vRealize Orchestrator.................................................................. 51 vSphere 6 Platform Enhancements ....................................................................................... 61 Solutions Exchange – Tuesday/Wednesday Afternoon with the Experts................................. 71 IBM....................................................................................................................................... 71 HP ......................................................................................................................................... 71 CISCO................................................................................................................................... 71 INTEL................................................................................................................................... 
71 SIMPLIVITY........................................................................................................................ 71 ZERTO.................................................................................................................................. 71 TURBONOMICS ................................................................................................................. 71 VMWARE - vExperts........................................................................................................... 72 Conclusion .................................................................................................................................... 73
  • 3. Introduction To start with the title in no way is original, in fact it was inspired by a title of one of the sessions this year. So immediately before I went to VMWORLD 2016, I was inspired by it. This year one of my most anticipated work events has been VMWORLD 2016, I had setup for my team several trainings, from VMware and from Cisco. But as of yet had not had an opportunity except my constant reading and investigation of platform technology. But I have a team of experts to rely on. I have transitioned most of my training now to Management Training on our Web Based Training System. Even then time is the commodity I think the Server Engineering team has the least of. It takes a lot to manage, and deploy platform resources and properly direct the team effort to achieve the goals of the organic growth and project demanded resources of the company. We manage 250+ virtualization hosts, and manage ~3600 VM workloads, and that does not include infrastructure support for VDI. So for me this year I felt this was the most critical training and experience I would get this year, there was no training, or conference I would rather have had the chance to go to. I was also extremely excited to share this experience with my practically my whole team including my manager was the icing on the top of a perfectly made cake. I have for the last four years with an amazing team of people worked on our digital transformation to the Software Defined Data Center. One of the things I was really looking forward to was that now that I was @ a leading healthcare insurance company, I only had to focus on my goals for our platform, so would be able to really get what I wanted out of VMWORLD. As I flew into Las Vegas with anticipation of all of the great revelations that would happen this year that it would be the next revolution, after all what could be grander than Software Defined Data Center they have pioneered, so it would have to be something big. 
As soon as I hit the ground I grabbed a Taxi and headed out to the Hotel to get my room, and much more importantly make my way to the VMware WMWORLD 2016 Registration. I made it to the Luxor, which is connected to the Mandalay Bay Hotel where the conference is being held, and I saw blue VMWORLD 2016 Backpacks everywhere. It was obvious that there would be a large attendance this year. I checked in and dropped my stuff off in the room, and made my way over to the conference, with computer geeks surrounding me on all sides, once I had registered the lady mentioned to me they were expecting upwards of 30k of us. 30,000 Geeks + VMware + Vegas = One hell of a good time! I spent the first night with the team and we ate at a Gordon Ramsey Restaurant, everywhere you looked you saw tech professionals who had made the migration and where in search of food and
  • 4. libations. We enjoyed the dinner and discussed among the team our target of realizing fully the value of the conference, but to make sure we come back armed to help advance Blue Shield of California platform to new heights. We wrapped up our kick-off dinner and all headed out to our respective destinations for the night. I made it back to my room, and rushed to bed excited of the next day events. Visions of Hyper-Convergence, SDDC, Private and Hybrid Clouds swirling in my brain as I drifted off. The next morning proceeded over to the conference and got in the line for the opening general session, and when I say line it was only sort of one confined by the walls of the hallways, 10 people across and because I was early and near the beginning going back as far as the eye could see. The doors would open just prior to the start of the General Session ad we would all filter into a room the size of two football fields. Music was loud and penetrating with a deep bass and current variety of house music playing, as I made my way to the front of the center of the room. Lucky me would get a first-hand view and would not need the monitors to see the speaker. The hired DJ in a Sphere contraption that was half DJ stand and half drums continued to mix out the high energy beat as the keynote started. Boom (drums), Tomorrow (Voice), Cloud…. Digital Transformation, the journey we have been on, we as a larger identity of IT. And tomorrow was here, the Cloud Era was not a Fad, it is now the standard. Cloud is the innovator, whether it is your own internal private cloud, or a public cloud, or the use of both in Hybrid- Cloud infrastructures. Companies are adopting at a phenomenal rate. Lots of companies have it as a top priority to complete their digital transformation. This year the trend would continue with the Software Defined Data Center (SDDC) and the leverage of those benefits, and new benefits being announced. 
But the new direction was that we were all mature enough for “Hybrid-Clouds” and Cross-Cloud capabilities.
  • 5. History of our SDDC Journey I thought of when I started this opportunity with a handful of clusters and a few hundred workloads, to the new colocations where we have driven 90% virtualization and now manage thousands of workloads, we are certainly digitally transforming. We like many others started this effort 3 years ago, us like many of them have made significant progress in our journey. But there was still further to go, more to understand, faster adoption to occur and tomorrow was certainly here. In the four years almost I have spent here, I have met and been able to work with some of the greatest people I have had the pleasure to work with. We took a huge journey together, and below is that story. My history was as a Sales Engineer, Solutions Architect, a Solutions Consultant, and a Professional Services Engineering Manager for technology integrators. What those roles had taught me was that Architecture and Planning were essential, that always following a vendor’s recommended reference architectures would typically cover 80% of the solution, and you would engineer the other 20% as a unique quality of the company you were doing the work for. It also taught me that the key to success is the human resources that were collaborating with you. My focus over the last 20 years has been Virtualization, starting with End User Computing (Citrix) and later VMware (vSphere and Horizon), the other areas of expertise I had gained over time because it was necessary to bring the whole stack together, and often times as a consultant you are running solo on a project or POC, was storage (EMC, IBM, HP, NetApp) and networking (Cisco, and HP) and hardware I worked with all of the providers out there (HP,IBM, DELL, Cisco). Other things I picked up was specialized tooling and scripting within my areas of focus. 
So when I landed a job with Blue Shield (Nice to move back to North Cali  ) I immediately began assessments of those two platforms and began to evangelize the appropriate architectures and designs to get the most out of these platforms. This has been a journey, one I am happy I took. I was skeptical about working for a non-IT company, but was going to help to forge my destiny here. For VMware I turned to VMware Validated Designs, and my own expertise to begin the journey properly. I did do some Citrix work along the way, but that is a different experience for a different time. Below is a brief history of the 4 years we have been on this journey, we have a long road behind us, but also a long road ahead. But we are ready and well-armed to accomplish this task. Forgive me if I missed any details, this is a summation of my reflections on my trip to Las Vegas. I truly appreciate my team members, past and present for without them we would not be where we are at today. Thanks for all of the hours, all of the commitment, all of the intellectual contributions, and most of all thanks for the team work! The journey for us started in our legacy data centers back in 2012, under different leadership. When I arrived I knew we had some distance to go and my work was cut out for me. We were approximately 20% of all workloads were virtualized and 80% traditional compute. I spoke with
  • 6. management and they definitely wanted to get it all virtualized and really begin to enjoy all of the benefits of Virtualization. As a first step we did an assessment of the virtual platform we had, we hired a Capacity Manager at my recommendation, and we began in earnest P2V projects to move our underutilized traditional server workloads into our new virtualized environment. I was also able to convince my leadership that we needed to have a person to do capacity management so we were not just taking a shotgun approach to our infrastructure. We also started directing all projects that needed compute to the VMWARE platform. We did some immediate refresh of cluster hardware, ordered more nodes, and consolidated and removed redundant cluster sprawl. The same year we also made a huge concentrated effort at getting Tivoli tooling and SNMP monitoring for the VMware platform, but unfortunately you don’t win them all. By the end of 2012 we had refreshed, standardized, and grew the virtual platform, we had a virtualization ratio of nearly 60%! The platform itself went from a handful (30 Hosts) to about 100 hosts. We had approximately a 10:1 virtualization ratio at this point. We re-organized in 2013 and I was moved under IT-Infrastructure Platform Engineering and converted to a FTE from a contractor. I immediately began working with my manager at the time to build a Virtualization practice and to move or hire good people to be a part of it. The work to be done would be much more than one person could accomplish. I was lucky to have a manager that empowered me, and told me to build the platform that I would be proud of. So with his blessing, and some great people we set forth to accomplish this multi-year transformation. We continued to virtualize and support new compute requirements on our virtual platform. 
Capacity was still being done manually but it was right and we were able to track the reservations and had a good expectation from the platform on what we could deploy and what that would look like. We had gotten some additional hardware in 2013 for the platform and we built out our first true purpose built clusters the PERF cluster with the intention of managing only DB workloads here and through DRS rules align with our licensing to ensure we had capacity and compliance. We also continued to move workloads into what we felt were more appropriate clusters, and renamed the clusters to represent purpose better; APP, CORE, PERF, CLOUD (Cloud Director) and EDGE. By mid-2013 we had been told we would be migrating out of our LEGACY Data Centers and to new Colocation Data Centers we would build out. We immediately began planning what that would look like and that we wanted to leverage as much of the Software Driven Data Center reference designs as possible. By the end of 2013 we had ordered the equipment for our new Data Centers and finished remediating the Legacy Data Centers to ensure as seamless a migration as possible. We planned out all of the infrastructure for the Colocation Data Centers in the months to come. We finished the year at approximately 70% virtualized. The focus in 2014 was a balance of building out the new colocation data centers, but maintaining business as usual as well. We brought up Sacramento first and as soon as we had it fully operational it was completely tooled, and SRM/VR ready for migration. Within the first couple of months of operations we started getting our “New” workloads in our first colo. These were not legacy migrations but new server requirements for new projects coming in the pipeline. We stopped directing build activity in the legacy data centers because we wanted to limit the amount of work we would need to do to migrate them. 
We implemented vCOps (vCenter Ops) as our standard tool as it allowed visibility from the DC to the VM, and had the ability to do analysis,
  • 7. reporting, and capacity modeling. The best part it was a native tool to the platform. I also implemented a weekly run of three scripts that I had used all of the time at client sites when I was consulting. Drove a weekly meeting with VMware Account team, which the partner/customer relationship was sorely suffering, to ensure that all the steps we took had vendor involvement and any potential recommendations. For the most part it was just validations that we had it all aligned right, and we were using a supported methodology for deployment that would make it easy for us to service our clients, but support to service us. It was the calm before the storm for the migration and I used that time to make sure my team had all of the resources they could need to be successful and had input into what we were building so they had an investment in its success. The team had grown in size to 10 including myself. We finalized the building out of the colocation facilities in 2014, we even had migrated some of the critical application infrastructures. Other IT teams were working on Pure Application systems that were being deployed to Legacy campuses. Eventually they too would migrate to the colocation facilities. I had some talks on automation, and that the future of the DC was going to be automated, I shared my vision of a self-healing DC that leveraged vCO (vCenter Orchestrator) and vCenter triggered alarms or warnings. We signed up as a team (3 FTEs) to go to VMWORLD together, very exciting, we wanted to hear firsthand about others, and their journeys to the SDDC. We completed the year 80% Virtualized. The following year 2015 started in a fury, we all had a goal and it had to be done…Migrate everything into our new Enterprise Data Centers. I deployed a multi-site SRM strategy that allowed each site to migrate through SRM to any other site, I also deployed VR (Virtual Replication) and eliminated all array based replication after getting our QoS set correctly. 
I recreated all of our existing recovery plans, and now the VMs were replicating regardless of datastore location. We deployed our first vSan clusters, replacing the traditional compute/storage stack at each of the branch sites and doing it for a lower TCO. We also had VMware come out and do a Health Assessment with Hardening Guide, since we wanted to ensure our new data center strategies were right in line with industry standards and, more than that, with the vendor-validated design. We did have some mild remediation to do, but all in all we were blessed by VMware and told that we had not only a very robust and resilient architecture but an incredible team of experts, and that the engagement had been beneficial for both sides.

Later in the year we did a vROps (vRealize Operations) project, since vCOps had been deprecated and was being replaced by it. That went quite well: we fully deployed to all vCenters, were collecting metrics, and had migrated our alerting system to it as well. We had several custom dashboards created, and we plugged into our EMC arrays so we would be able to traverse the whole stack minus hardware and network. We had all components installed, including Hyperic, which would allow us to tie into physical infrastructures as well. I worked on getting all of our licensing updated that year to vCloud Advanced licenses so we would have the ability to fully leverage the tooling on the platform and have visibility and actionable data. We battled to the end on the legacy migration, migrating the last 500 workloads in the last three months of the year! Very impressed with the team, very satisfied with the platform and tooling we had deployed. I also did some other VM work at our external data center that year, decommissioning the legacy virtual environment and building a new virtual environment to meet all of the demand, and to virtualize our Citrix FI servers as well.
I sent a few of the team to VMWORLD, but missed it myself that year. By the end of the year we had done it, done all of it: the migration, all of the licenses we had requested, the heightened level of VMware support, our native tooling fully deployed, and the hardware we asked for. Real satisfaction at a job well done, looking forward to the next year and the migration of the external data center. We had worked too many hours to count, and had poured our talent and our confidence into the two new data centers, and they worked. The report I got from VMWORLD 2015 was that it was more of the same as the prior year, working on the digital transformation; not unexciting, but a validation that we still firmly had our feet on the right path. The resources I sent appreciated it, and I got updates online by catching the sessions, or in our bi-weekly meeting with VMware under our new Business Critical Support (BCS) relationship.

This year, 2016, started a lot calmer than the prior two. In fact I remember thinking near the end of January that it was too calm. But that was OK too; I wanted to focus on standards, processes, documentation updates, platform advancement, a better understanding of and relationship with our tooling, and MOST importantly integration into the Network Operations Center like any other enterprise platform. Waste reclamation was also a hot item for me: I wanted to make sure we started to eliminate waste in our environments and evangelize the impact of an oversized workload both on itself and on the whole of the workloads sharing a cluster or a host. I wanted to take a deeper look into advanced analytics and automated workload placement. I had meetings with both Cirba and VMTurbo (now Turbonomic) about automatic workload placement and automatic resource balancing: eliminating the human factor in trying to balance properly between clusters, and filling the gap DRS has, which is that it only works within a single cluster. About this time Derek gave his notice, which admittedly was of no consequence to the Virt team, other than that we would have a new manager.
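The waste-reclamation idea above can be sketched as a simple rightsizing heuristic. This is an illustrative Python sketch, not our production tooling — the thresholds, headroom factor, and sample data are all hypothetical assumptions: flag any VM whose observed peaks sit well below its allocation, and estimate what could be reclaimed.

```python
# Hypothetical rightsizing heuristic for waste reclamation.
# Inputs are (name, allocated vCPUs, peak CPU fraction, allocated GB RAM,
# peak RAM fraction) as a monitoring tool might report them; the sample
# data below is invented for illustration.

def reclaimable(vms, cpu_peak_max=0.30, mem_peak_max=0.40, headroom=1.25):
    """Return per-VM downsizing suggestions for persistently idle VMs."""
    suggestions = []
    for name, vcpus, cpu_peak, ram_gb, mem_peak in vms:
        if cpu_peak < cpu_peak_max and mem_peak < mem_peak_max:
            # Size to observed peak plus headroom, never below 1 vCPU / 1 GB.
            new_vcpus = max(1, round(vcpus * cpu_peak * headroom))
            new_ram = max(1, round(ram_gb * mem_peak * headroom))
            suggestions.append((name, vcpus - new_vcpus, ram_gb - new_ram))
    return suggestions

sample = [
    ("app01", 8, 0.12, 32, 0.25),   # oversized: low peaks on both axes
    ("db01", 16, 0.85, 64, 0.70),   # busy: leave alone
    ("web03", 4, 0.20, 16, 0.35),   # oversized
]

for name, cpus_freed, gb_freed in reclaimable(sample):
    print(f"{name}: reclaim {cpus_freed} vCPU, {gb_freed} GB RAM")
```

The real value, as the Cirba/Turbonomic discussions highlighted, is running something like this continuously across clusters rather than as a one-off report.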
I had the team and myself continue our activities to build out the needed capacity according to our current roadmap for cluster configuration. We continued to refine the metrics in vROps, and had a waste reclamation meeting with Design Engineering, before we knew he was to be our manager, where I shared candidly my concerns for the environment: that we needed to make sure all players understood design for the virtual platform and were not just throwing away resources and impacting overall platform performance. Then the re-org happened, and to our great benefit Design Engineering, Server Engineering, and Storage Engineering would now be under one common management and operate as a collaborative unit. This could only be good for the platforms, the teams, and me. I now had new platforms to run as well as VMware: AIX, hardware, and PureApplication. The key would be making sure the right strategic and tactical decisions were being made, and ensuring we had standards, process, design, and operational aspects covered. We had a large body of information already, as we had used SharePoint since 2013 to track the platform, issues, and tasks, including our designs, as-builts, and reporting information. We continued our direction on the VMware platform, undeterred from our goals of supporting the migration and advancing our platforms. We are winding down the migration, looking forward at new potential compute requirements, and at some much-needed time to focus on each of our platforms to make sure we set similar standards and enable the same level of insight for each. So I have started to integrate all of the platforms we govern, as well as tracking all OS standards.
We finished our study with Turbonomic and got some great collateral that shows value specific to us (I was super jazzed until they announced workload placement within vROps), something that could make a real difference in platform performance and in the automatic allocation and de-allocation of resources. I am embracing my new role as Team Lead and trying to remain more managerial and less technical, but it is hard to keep the engineer out. I appreciated the coaching from my management team, and feel very comfortable that we share similar goals and objectives. So my objectives this year at VMWORLD were twofold: managerial and technical strategy. I wanted to focus on mainstream operations of the SDDC, really focus on the business case for NSX, and apply the other principles from our tooling to enhance our services to our customers and drive service excellence. I want to express my sincere appreciation for the opportunity to come this year, as well as the opportunities to be part of a grander vision!
A trip through VMWORLD 2016

As I was sitting in the high-energy General Session room, I could not help reflecting on how, in our own organization, we have changed the paradigm: deployment of application infrastructures once measured in months, sometimes even years, is now measured in days and weeks. Automation will take us through the next steps, reducing days of deployment to hours, eliminating the human factor from Day 0 through Day 2 operationalization through automation and intelligent provisioning of critical application infrastructures, and shortening delivery times of infrastructure to seconds and minutes. We had followed the reference designs and have seen a great number of features light up within our platform. A VMware Validated Design is a prescriptive blueprint with comprehensive deployment and operational processes mapped out. The advantage of using one is that it is fully vetted by vExperts and architects working at VMware. The following are the clear reasons why you would choose this methodology:

1. Standardized Designs
2. Proven and Robust
3. Broad Use Cases
4. Comprehensive Documentation
5. Vendor Certified

Would we extend this to embrace services from the public cloud, actually run some of our infrastructures within the public cloud, and become one of those that truly adopt hybrid cloud? The opportunity made me excited to be here, to be at VMWORLD, to be where tomorrow was possibilities and opportunities, to be in tomorrow… This was starting to feel like we had found our pace in this marathon, with the realization that we needed to complete the ecosystem, and our SDDC (private cloud) would be the success they reference, a best-of-breed success! Then came the moment we all had been waiting for: Pat Gelsinger, CEO of VMware, came on the stage.
General Session Day 1 – Competitive Advantage in the Multi-Cloud Era

Mr. Gelsinger takes the stage at a brisk walk despite the foot brace on his right leg. He stops center stage, turns towards the audience, and says, "What a provocative way to start! Which way will you face? The obvious answer is that we will face forward together." The highlights of the morning session were that all business is now digital business; that there are no traditional businesses left; and that all businesses need data analytics to continue to transform to digital business and innovate in that space. It turns out that only 20% of companies are leaders in the digital transformation, and the other 8 out of 10 are struggling to achieve it. The digital age is well under way, and it is as transformative as the industrial age. It can be just as disruptive if we don't plan for it, integrate the components, and truly embrace the digital transformation.

Pat went on to give a timeline of the cloud, and what the adoption rates looked like from the past into the future:

2006: The cloud begins; digital transformation begins. 2% Public Cloud (Salesforce); 0% Private Cloud; 98% Traditional IT (29 million workloads)
2011: 7% Public Cloud; 6% Private Cloud; 87% Traditional IT (80 million workloads)
2016: 15% Public Cloud; 12% Private Cloud; 73% Traditional IT (160 million workloads) — Agility, Flexibility, Scalability, Resilience
2021: the 50% mark for cloud — 30% Public Cloud; 20% Private Cloud; 50% Traditional IT (255 million workloads)
2030: more than 50% in the cloud — 52% Public Cloud; 29% Private Cloud; 19% Traditional IT (596 million workloads)

Astounding. This of course is a conglomeration of all types of devices, from mobile to IoT, consuming workloads. The point: cloud is not a maybe, it's a definite, and this was decided by predictive analytics.
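To put those percentage splits in absolute terms, the shares can be multiplied out against the quoted total workload counts. A quick arithmetic sketch using only the figures given in the keynote timeline:

```python
# Convert the quoted cloud-adoption percentages into absolute workload
# counts (totals in millions, as given in the keynote timeline).
timeline = {
    2006: (0.02, 0.00, 29),
    2011: (0.07, 0.06, 80),
    2016: (0.15, 0.12, 160),
    2021: (0.30, 0.20, 255),
    2030: (0.52, 0.29, 596),
}

for year, (public, private, total) in sorted(timeline.items()):
    pub = public * total
    priv = private * total
    trad = total - pub - priv
    print(f"{year}: public {pub:.1f}M, private {priv:.1f}M, "
          f"traditional {trad:.1f}M workloads")
```

Run it and the scale of the shift stands out: public cloud goes from roughly half a million workloads in 2006 to around 24 million in 2016 and over 300 million projected for 2030.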
Of course there will be the holdouts; even today some of them don't want or need their requirements changed, they have long-term client lists, and retirement is not that far away. Believe it or not, in IT I worked for a boss who would always ask for printouts so he could analyze them by hand. But the need to analyze never stands still. It is essential to successfully running a platform: to avoid 90% or more of the issues, you have to have sound analytics, and now tactical deployment of actions triggered by those analytics. It is the visibility, and it is a core engine in the fabric of the Software-Defined Data Center — one that maybe does not get enough attention, or credit.
Then Pat went into the trends of hosting vs. on-premise deployments:

2016: Hosting is a 60-billion-dollar business, with 8.2 billion devices getting access to services.
2021: Hosting is a 110-billion-dollar business, with 8.7 billion devices consuming services.

Again he pointed out these are active devices, not sensors or other measurement devices. But as cloud takes root, IT becomes more cost effective and services become more accessible, placing a heavy onus on the designs of these solutions. And because we all like top-ten lists, Pat had one for us. Which vertical industry has embraced cloud most aggressively?

#10 – Construction
#09 – Professional Services
#08 – Securities and Investments
#07 – Insurance
#06 – Transportation
#05 – Manufacturing
#04 – Banking
#03 – Resources
#02 – Communications
#01 – Technology Vendors

So insurance is not so bad; we made the list at #7. Compliance presents the largest obstacles, but there are some real opportunities for trusted vendors like vCloud Air or IBM Cloud to assist us. We are embracing new ways of working, changing the way we work, changing the way we think. It is really all about freedom vs. control, he went on to say, and needing freedom with control, because in the END, no matter what, IT owns security. He stated that in 2016 80% of compute is virtualized industry-wide — impressive movement — and that a large amount of that was virtualized on VMware, and that we need to complete our journey to the full SDDC with NSX and hyper-converged building blocks. He mentioned only then would we be ready for hybrid clouds, but said not to worry: they had the solution for the point when we are mature (or risky) enough to want to use public cloud infrastructure. This was the gauntlet thrown down, that we should all be cross-cloud connected. He played a quote from VMWORLD 2014 where Raghu Raghuram, one of VMware's Chief Technologists, said "Increasingly, all infrastructure components we know of are developed and deployed in software. Even more importantly, the control of this data center is done entirely by software. The data center is on its way to becoming programmable and automated. We call this the Software-Defined Data Center (SDDC)."
At its core, innovation is the key: an awareness of what the SDDC truly is, of what can be plugged in, and of how it all looks holistically. Hundreds of vendors are developing solutions for the SDDC every day. Intel and McAfee, to name one partnership, are well on their way to becoming the de facto players in platform security, leveraging the opportunities brought to us with NSX. Standardization of building blocks was key to the private cloud infrastructure: highly converged and contained units of:

Management
Automation
Compute
Storage
Network

He mentioned VMware's commitment to helping the customer make this transition, stating that VMware products allow the agility to achieve a lot of scale very quickly. He revealed Cloud Foundation (data center automation — just add water) and SDDC Manager, so you could push down your security, and ultimately your workloads, out to the cloud of your choosing. He revealed that the real VMware Cloud Foundation was the SDDC plus use of their vCloud Air or IBM public clouds, and that the new SDDC Manager product would allow you to evaluate metrics such as latency, cost, etc., to make choices on where to deploy or duplicate a workload. Some of the features were interesting, as you could set the policies, and to what level, needed to automate and control the data center on premise or off. And overall it continues to support VMware's Validated Software-Defined Data Center Design.
The VMware Cloud Foundation and SDDC really bring together several things into a recipe for success:

- vSphere 6
- vRealize Orchestrator
- vCenter 6
- vSAN – Software-Defined Storage
- NSX – Network Virtualization
- SDDC Manager
- vRealize Operations
- Private Cloud
- vRealize Automation
- Public Clouds (vCloud, IBM, and other API-connectable clouds)
- Security enhancement and micro-segmentation of workloads
- 3rd-party extensibility plugins and products for even more services (security infrastructure, load balancing, etc.)

I certainly see the value in the time and effort of putting together solid enterprise solutions that leverage the whole stack in software. Security is the true enabler of cloud adoption. There are hundreds of appliances that can plug into this, and the more we innovate the more we can do.
The VMware Cloud Foundation automates all aspects of the infrastructure deployment lifecycle:

- Makes private cloud easy
- Enables EASY adoption of public cloud

Coupled with SDDC Manager, you are able to have fully cross-cloud infrastructure capability without compromising expectations of a secure and compliant environment:

- Solves the paradigm of freedom vs. control with freedom with control.
- Allows us to extend our SDDC to the public cloud.
- Allows us to continue to innovate our business strategies and not be limited to what is available.
- When going to either vCloud Air or IBM clouds, you can deploy in hours an SDDC that entirely meets your requirements, security, and compliance, making them ideal partners for healthcare.

While I was listening I traced out what I saw our logical design looking like. I did not add components for SDDC Manager, as we were still unclear what that is; my assumption is that it would be an appliance and would work great in our primarily appliance-heavy VMware platform. I also did not depict public cloud usage, as I think we are too early in that process to visualize it much, but it is an area we need more info on, specifically from VMware and IBM. After listening to Pat I realized that we just needed to add their remaining component, SDDC Manager, which would extend our DC into two of our current vendors' clouds if needed — both of which have indicated they would work with us through our compliance issues.
I also realized that our agility to stay in pace with the industry and our competition comes down to our ability to rapidly provision services to both our internal and external clients seamlessly, quickly, and within our requirements. The nice thing is that these are both vendors (VMware/IBM) we have deep and long-standing relationships with, and both value our partnership and are equally happy to help us understand any complexity along the way and to invest in our success. Several bullets of data were shared:

- Is the IT department still needed?
  o Innovation
  o Control
  o Security
  o Compliance
  o Legacy support
  o Cloud support
- There is always a cost, whether private or public, and we can easily analyze this in SDDC Manager by inputting the costs of our private cluster deployments; then we could see the real achievable ROI.
- Security is key to cloud adoption.
- The way of doing business is changing; the way applications are delivered is now going to be containers, specialized VMs for application delivery.
- Hybrid cloud is the solution for the new IT.
- Cross-cloud services and actions:
  o Manage
  o Secure
  o Deploy
- The SDDC can support any application, on any cloud, to any device.
- VMware Cloud Foundation is a fully integrated and ready SDDC.
- NSX is a fundamental technology for supporting all of the security requirements in both private and public cloud infrastructure:
  o Allows security policies to be applied to our workloads whether on premise or in the public cloud.
  o Same NSX technology for both; set up unified security templates to be applied.
  o NSX features:
     Policy
     Firewall
     Encryption
     Routing
     Switching
     DHCP
     Micro-segmentation for VM security
     East-west protection

The first day's General Session had several visitors on stage, from the CTO of Johnson & Johnson and a VP of IBM to Michael Dell, CEO of Dell Inc. Dell's message was that they were very excited to be part of such a vibrant ecosystem, saw the partnership and direction of VMware as right on track, and, most importantly, that VMware products will still be delivered by VMware. The quotes of previous years ran through my mind, and how, like prophecy indeed, the data center and industry have evolved along those lines. A good first-day General Session — stoked about the day, about tomorrow, about where we go next!

Now, back on earth, on to my normal breakouts and the simpler questions, like: tell me all of the new features of vROps, and of the Log Insight integration now that it is part of my license. Today I would like to see progress towards the ability to analyze, collect, and report on both unstructured and structured data in the vSphere platform. Tell me about the fling or the new way to move my Wintel vCenter to the appliance and up to 6.0. More mundane, but the overall vision of the morning is reassuring: we are still on track.
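The micro-segmentation and east-west protection features listed above amount to an allow-list evaluated per workload rather than per network perimeter. This is a toy Python model of that idea — the security groups, rules, and flows are invented, and real NSX distributed-firewall policy is far richer — but it captures the default-deny principle: every east-west flow is blocked unless a rule between the source and destination groups explicitly allows it.

```python
# Toy model of micro-segmentation: default-deny between workload
# security groups, with explicit allow rules. All names are invented.

ALLOW_RULES = {
    ("web", "app", 8443),   # web tier may call the app tier on 8443
    ("app", "db", 1433),    # app tier may reach the database on 1433
}

VM_GROUPS = {"web01": "web", "app01": "app", "db01": "db"}

def flow_allowed(src_vm, dst_vm, port):
    """East-west check: allowed only if a group-level rule matches."""
    rule = (VM_GROUPS[src_vm], VM_GROUPS[dst_vm], port)
    return rule in ALLOW_RULES

print(flow_allowed("web01", "app01", 8443))  # allowed by the web->app rule
print(flow_allowed("web01", "db01", 1433))   # denied: no web->db rule exists
```

Because the rule keys off the workload's group rather than its IP or subnet, the policy travels with the VM — which is exactly the portability point made in the session.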
General Session Day 2 – Competitive Advantage in the Multi-Cloud Era

Admittedly I watched this one on Tuesday evening in my room on YouTube, but I still wanted to provide my summary. Day 2's General Session starts with Sanjay Poonen, EUC VP, who begins the session with a discussion of what VMware is doing in this space with Horizon and AirWatch. The world is going digital, and we are in the fourth revolution for humanity, the digital age. He states that Any Device, Any Application, Any Cloud is the focus of VMware. (Sound familiar? Citrix.) He was a charismatic speaker, and the information on Workspace ONE was very compelling, as he talked about everything in their product being cloud driven: that they value identity and access to applications from any device and location, the mobile experience. He talked about the integration with their mobility management product AirWatch, and that together with Horizon this makes up Workspace ONE; that they have been very successful at selling this as a service as well, through Horizon Air, offering the remote experience that the new workforce expects; and that management of devices and security is handled through AirWatch, with conditional access features protecting company data from being downloaded to unauthorized endpoints.

He introduced Stephanie Buscani, Executive VP from Salesforce, at this point, and she came on stage and told of their experience with Horizon and AirWatch — the Workspace ONE experience. She was of course very positive and told of several key points on which they relied on the service. Stephanie went on to say they would not be able to deliver all of their products if it were not for this trusted methodology delivered through VMware. Again NSX was a critical component for VM micro-segmentation and for securing the underlying infrastructure. She described how Salesforce has had a great deal of success in their deployment:

1. A robust, open, and trusted methodology that is how they deliver all of their applications.
2. Salesforce is in a partnership with VMware due to a deep trust in the platform.
3. SSO and security are done entirely through AirWatch.
4. Overall simplicity of security templates and their application.

VMware mentioned they have two as-a-service options, through Horizon Air and also through the IBM Cloud partnership. He talked briefly about exciting opportunities to use App Volumes to deliver applications while maintaining the lifecycle of a single volume. He mentioned that NSX is a critical piece of the puzzle due to micro-segmentation. He also stated that Salesforce is just one of many Fortune 500 companies that trust them for their services. He stated that the Horizon team is focused on the TCO of virtual desktops and is
targeting to get the cost of a VDI to under $5,900 per year, a clear improved ROI over other competitors. At this point he invited the CEO of TrustPoint, one of VMware's recent acquisitions, to come out and discuss endpoint security. One of the unique differentiators of TrustPoint is that it allows plain-English queries and searches across the global network to return results. It is powered by Tanium, a leader in the endpoint security space. It was an interesting demo, and they proved they could do a number of searches and remediations right through TrustPoint. The main objective was to discuss its extensibility to AirWatch and the power they bring together to the VDI space. They wrapped up the EUC discussion with the point that Workspace ONE allows for all management and visibility through a single interface, with applications delivered to users seamlessly and securely, to any device, anywhere.

The session was then turned over to the CTO of VMware, Ray O'Farrell. Ray came out and immediately started to talk about what they hope their relationship with their customers will be, and that by doing this they would help their customers understand how cloud enables the goals of the enterprise. Consumption can be as great as innovating on your own requirements. Bottom line, they would like to continue to be the trusted advisor we feel comfortable coming to in order to better understand the landscape of the new IT and how to overcome the hurdles in our digital transformation.
The key challenges are:

- Enabling the organization to make the digital transformation through understanding of the new technologies.
- Security for the new enterprise: understanding the new security challenges.
- Enabling deployment or integration of cloud applications.
- Understanding containers and how they change the application delivery model.

At this point the CTO of Cloud Platform for VMware took the stage to talk more about what VMware is doing to make cloud easy. He talked about the evolution of the cloud and about the use of containers becoming more prolific. He also talked about the worries of the enterprise in supporting new models of business, and IT's perspective on the complexity of implementing:

- Networking
- Monitoring
- Accounting
- Storage
- Security
- Portability
- Repeatable deployments
- Incident/problem management
- Availability
- Backup
- Disaster recovery
- Business continuity

He then unveiled vSphere Integrated Containers as well as VMware's own container platform, Photon. He said that both were equally amazing in the breadth and scope of what they can do, but he wanted to focus on vSphere Integrated Containers because of its capabilities and the simplicity for IT to adopt. He flashed back to the "VMware and you" slide shown previously, saying the balancing act is striking the balance between scalability, security, and compute, and listening to and understanding each unique aspect of the business: helping the devs and IT to see that there really is no complexity in making this work within the right framework, and vSphere Integrated Containers makes that happen:
Technically this looks like the SDDC diagram shown in the Day 1 General Session. With the SDDC being portable, you can utilize hybrid cloud to improve response times geographically, or to reduce costs between private and public cloud pricing. Ultimately it detaches the application from an OS, and this level of simplicity reduces the surface for issues to occur. It does indeed require a shift in how we in IT look at the delivery of applications, as well as at the development of future versions of traditional in-house applications towards a more web-centric infrastructure.
He went on to list the features of Integrated Containers, then wrapped up with the benefits that were tangible for IT:

- Containers are just another VM from the infrastructure management perspective.
- NSX provides heightened levels of security for these VMs.
- Container management is completely integrated into the vSphere Web Client, the default administration interface for vSphere.
- Easy to use with vRealize Automation, and with SDDC portability they move to where they are needed.
- Automatically integrates with vRealize Operations: full monitoring of the VM and connected infrastructure.
- vRealize Automation is ideal for provisioning and managing the deployment of containers; container management is also embedded in the vRealize Automation console.
- Adds a new layer of abstraction from the OS.
- Containers are extensible, and 3rd-party products can easily enhance or be added to them.

And lastly, before turning to discuss NSX, he revealed Photon, with the disclaimer that it is not yet fully mature.
Photon is currently under development, but it is another innovation for the SDDC. The topic then turned to NSX, network virtualization:

- The biggest transformation of the traditional network stack
- Envisioned academically in 2013
- 1,700+ customers have deployed it as of 2016

Much like compute and storage, it is critical that the network also makes the paradigm shift from a hardware stack to a software-driven service, which enables advanced feature sets, capabilities, and services, including east-west security and micro-segmentation of the VM — ensuring all exposures are remediated and only what is needed for that VM's purpose is exposed to support a service. NSX really breaks down into three focus areas that drive the whole of the technology: security, automation, and application continuity. NSX provides strategic security and network services at every level of the stack today, and its ability to be entirely integrated into the automation of the SDDC makes it a revolutionary step forward in both the security and network spaces. The average cost of a security breach is over 4 million dollars, plus an immediate impact to corporate reputation. Traditional network components and configurations take days, sometimes even months, to deploy. NSX allows workloads to be deployed at VM speed, which is in
minutes or hours depending on complexity, fully automated and policy-driven. With NSX, your network and the security of those critical networks are always on. NSX allows for true application portability between data centers and even public cloud infrastructures. NSX is a strategic partner at every layer of the OSI model, introducing new heights of security capability and network management and provisioning that are unparalleled by its traditional counterpart. "NSX gives you a secure, agile network, which is key to the critical operations of your company."

Where do we start to make this happen? Assess, Plan, Enforce, Monitor.

First, an NSX pre-assessment report:

- There is a free tool called vRealize Network Insight that needs to be run to ascertain the current state of the network. NSX needs to be installed at this point so that deeper insight can be gained.
- vRealize Network Insight can report what rules need to be deployed to secure the environment.

Once the plan is developed it can be moved over to enforcement, and this can be deployed all at once or in phases. Then the NSX assessment needs to be run again to establish that all is configured and enforced as expected.

- vRealize Network Insight is meant to be used on a regular basis to review the security of the networks where we have workloads deployed.
- NSX is transformative and foundational for your network and its security.
- NSX analyzes the workloads to determine appropriate security for each workload.
- NSX provides end-to-end security of the workloads through micro-segmentation; those settings become intrinsic to the workload.
- NSX creates workload security that is appropriate, but is also portable and moves with the VM.

He wrapped up the discussion on NSX stating that there is no better time than now to embrace it; it is a key component of the SDDC. He passed the microphone to the VP of vSan at this point. The discussion started with the fact that most of the new hyper-converged infrastructure (HCI) offerings are powered by vSan, and that customers have had great success with vSan in their environments, whether homegrown solutions or packaged solutions like vxRail. vxRail makes an ideal building block, she went on to say, considering what you get within a 2U form factor: 4 compute nodes, integrated storage, and network. You can use up to 64 nodes per cluster under 6.0, and you can purchase the smaller building blocks or upgrade to vxRack and do it at larger scale. While traditional storage solutions will still have a place in the data center, the storage that powers VM infrastructures will likely live within the HCI, due to the reduction in cost and the higher-end performance of DAS versus SAN- or NAS-connected storage. One of the keys to building the data centers of tomorrow will be the use of HCI and standardized building blocks for build and capacity.

- vSan is optimized for flash; in fact, on all-flash deployments you get deduplication and compression features that shrink the overall data footprint, positively impacting available capacity. Being direct-attached flash also eliminates latency incurred at the fabric, so storage performance is not the bottleneck.
- Redundancy and resilience are as good as traditional arrays, or better.
- vSan is completely integrated in the vSphere Web Client as well as vROps allowing for in depth monitoring, and analysis of trends or impacts. - vSan runs some of the most critical workloads from some of the best known application vendors for a lot of Fortune 500 companies. Application workloads like Oracle, SQL, SAP, Exchange, Etc… - vSan is widely adopted in VDI infrastructures - vSan streamlines deployments and is easily expandable by adding additional compute or storage nodes. - HCI = Hyper Converged Infrastructure
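The deduplication and compression point in the bullets above translates directly into effective capacity. A small Python sketch — the raw size and savings ratios below are invented assumptions, not benchmarks, since real ratios vary heavily by workload:

```python
# Invented example: effective capacity of an all-flash cluster after
# deduplication and compression. Ratios are assumptions for illustration.

raw_tb = 40.0            # raw flash capacity in the cluster
dedup_ratio = 1.7        # assumed deduplication savings (1.7:1)
compression_ratio = 1.5  # assumed compression savings (1.5:1)

effective_tb = raw_tb * dedup_ratio * compression_ratio
print(f"raw: {raw_tb:.0f} TB -> effective: {effective_tb:.0f} TB "
      f"({dedup_ratio * compression_ratio:.2f}:1 overall)")
```

With these assumed ratios, 40 TB of raw flash behaves like roughly 102 TB of logical capacity — which is why the capacity impact was worth calling out in the session.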
  • 25. She showed a slide on a cloud provider that uses vSAN extensively: IBM Cloud has adopted vSAN-based HCI and has stated that it can provision storage at 3x the pace, while also seeing overall VDI performance improve by as much as 10x in some use cases. She went on to discuss the use of Virtual Volumes (vVols), which make storage very VM-centric, more of a 1:1 relationship. Instead of allocating datastores, which will always carry some level of waste, you are able to provision storage directly to a VM; it grows and shrinks as the VM requires, within the bounds of the vSAN capacity. vSAN is the default storage type in the Integrated Containers reference architecture, and it is the storage behind the VMware Photon container platform. The preference here is for the heightened performance of vSAN, which enables trusted performance of the containers delivering application services. She said there was more on the horizon for vSAN in the near future, and she listed some of the features they are looking to release in the next version:
- Policy-based management, introducing intelligent metrics that will change how we can use and prioritize actionable data.
- End-to-end encryption.
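The datastore-versus-vVols contrast above comes down to where the allocation lives. A toy model (all numbers invented for illustration, not a real provisioning API) makes the difference concrete:

```python
class VVol:
    """Illustrative per-VM volume: allocation tracks what the VM consumes."""
    def __init__(self, used_gb):
        self.used_gb = used_gb
    def grow(self, gb):
        self.used_gb += gb   # volume follows the VM's demand up
    def shrink(self, gb):
        self.used_gb -= gb   # ...and back down, returning space to the pool

# Datastore model: 1000 GB carved out up front for VMs using 500 GB total.
datastore_gb, vm_usage = 1000, [120, 300, 80]
stranded_gb = datastore_gb - sum(vm_usage)      # reserved but idle capacity

# vVol model: each VM's allocation is exactly its usage within pool capacity.
vols = [VVol(gb) for gb in vm_usage]
vols[0].grow(40)                                # one VM needs more space
allocated_gb = sum(v.used_gb for v in vols)     # nothing stranded
print(stranded_gb, allocated_gb)
```

In the fixed-datastore model the 500 GB of headroom is committed whether or not any VM ever touches it; in the per-VM model the allocation simply follows demand.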
  • 26. Lastly, she mentioned that vSAN is fully integrated with vRealize Automation and that all vSAN configuration is done through the vSphere Web Client. That wrapped up the general sessions for VMWORLD 2016; I felt both were of tremendous value and provided great insight into both the present and the future to come.
  • 27. Breakout Sessions Summary
The breakout sessions are mostly technical in nature and provide a micro level of information in most cases. I have listed the breakout sessions I took and will summarize the most exciting points as potential best practices, feature enhancements, and platform evolution. For the most part my classes were about the SDDC, vRealize Operations Manager, and deep dives on particular components of the stack. It was always my intention when deploying the new colocation facilities that we would drive the SDDC and all of its components so we could realize the promise of the holistically designed Software Defined Data Center. The following are the breakout sessions I attended:
Monday
1. SDDC and Hyper-Converged Have Arrived, Get Onboard! [SDDC9035-S]
2. Hyper-Converged Infrastructure at Scale vxRack 1000 SDDC [SDDC9023] – Toured the Solutions Exchange, met with EMC to discuss vxRail and vxRack, and went by Simplivity and Nutanix to review their offerings.
3. How to Manage Health, Performance, and Capacity of Your Virtualized Data Center Using vSphere with Operations Management [INF8275R]
4. The KISS of vRealize Operations! [MGT7718]
5. VMware Validated Design for SDDC – Operations Architecture Technical Deepdive [SDDC8423]
Tuesday
1. vSphere DRS and vRealize Operations: Better Together [INF7825]
2. Deep Dive into Deploying the vRealize Cloud Management Platform the VMware Validated Designs Way! [SDDC8946]
3. Getting the Most out of vMotion: Architecture, Features, Performance and Debugging [INF8644] – Skipped for a session with a vExpert on Simplified Data Center Management with vROps and vRO.
4. VMware Cloud Foundation Backup and Disaster Recovery [SDDC9181] – Skipped for a Veeam discussion at the Solutions Exchange.
Wednesday
1. Extreme Performance Series: DRS Performance Deep Dive—Bigger Clusters, Better Balancing, Lower Overhead [INF8959]
2. Extreme Performance Series: Virtualized Big Data Performance Best Practices [VIRT8738] – Skipped for a Solutions Exchange vExpert session on vROps.
3. SRM with vRA 7: Automating Disaster Recovery Operations [STO8344] – Skipped for a Solutions Exchange vExpert session on vROps.
  • 28.
4. An Architect's Guide to Designing Risk: The VCDX Methodology [INF9048] – Skipped for a Solutions Exchange Intel session on their Security Controller and NSX.
5. The vCenter Server and Platform Services Controller Guide to the Galaxy [INF8225]
6. An Industry Roadmap: From Storage to Data Management [STO7903] – Jason and I went around the Solutions Exchange, met specifically with Zerto and Simplivity, and spent time with several other vendors as well.
Virtual
1. Manager's Guide to the SDDC
2. Hyper Convergence in Healthcare: The Key to Doing More with Less
3. Digital Transformation – Technology and Business Futures, a CTO's Perspective
4. An IT Architect's Guide to the Software-Defined Data Center
I immersed myself in many aspects of the data center, and even added some virtual sessions when I got home, since there were conflicting sessions I could not attend. It was very cool that this year they made 80% of everything available online immediately at www.vmworld.com. If you are an alumnus of a prior year, you can log in and look at any previous year's sessions, including the current VMWORLD 2016 breakout sessions. I of course encourage you to explore that, and if you need any assistance getting access please let me know and I will be happy to help.
Network Virtualization (NSX) weighed heavily on my mind the whole conference: our inability to overcome the political issues blocking deployment of such a STRONG component with so much CAPABILITY. It is worth mentioning that NSX certainly does not shorten the network administrators' task list, as we still need a robust, well-maintained network to support it. Nor is it a threat to them: every time we have raised it, we have offered to work with them on the implementation and to set up a network administrator role in VMware that would allow them to continue to manage the network, both physical and virtual.
There is so much opportunity we are missing out on; everywhere I went I heard about what we could be doing with NSX, especially when I visited Intel's booth this year and they showed me the ecosystem they recommend for use with NSX. I will do the following sections primarily in bullet-point format and will scan in and include diagrams from my notes that I think have value. I will not be covering each session, as some of them had information that would be redundant with others. Instead I have summarized by topic: the Software Defined Data Center (SDDC), vRealize Operations (vROps), and finally vSphere 6. I will cover vSphere 6 lightly, since it has not fundamentally changed how things work, and will probably just highlight some of the new features. I will also not spend much time on NSX, since I believe I covered it in some detail in the general sessions, along with my disappointment that it is missing from our stack.
  • 29. The Software Defined Data Center (SDDC)
"By the end of 2016, every relevant IT organization will have standardized on a Software Defined Data Center approach to IT. The key to creating an agile enterprise. VMware's task is to help our customers transform to this new model as fast as possible." –Pat Gelsinger, CEO of VMware, December 2014
"Wake up as a software company" –GE
That is because we are all software companies now: we rely on our infrastructures, our applications, and our delivery of digital services to our users and customers. No longer can we ignore that we have a HUGE investment in software; even our data centers are a conglomeration of software at almost every level. New IT has cross-functional capabilities enabled by the SDDC. The ways we are enabled are:
1. A new level of agility is possible.
2. Cloud and as-a-service models are enablers of modern digital-age companies.
3. Deployment times have decreased from days to hours and minutes.
4. We have application agility through SDDC portability to any cloud, private or public.
5. New building blocks are enabled with Hyper-Converged Infrastructures (HCI).
We have done this with the adoption of the Software Defined Data Center, or SDDC. So what is the SDDC? The SDDC is a highly automated, easily managed platform that embraces all applications and delivers them anywhere. An SDDC is just like it sounds: the software version of the physical counterpart. It is decoupled from that physical counterpart, allowing the whole of the data center to be portable.
- All infrastructure virtualized
- Automated by software
- IT delivered as a service
- IT perceived as an enabler
- IT able to extend the SDDC to the cloud or another data center
- Unparalleled analytics
- The ability to convert from on-premises to a partner-compliant cloud automatically, once executed
The Software Defined Data Center (SDDC) has taken us to new heights of capability, giving all aspects agility, resiliency, portability, and flexible elasticity. Components such as Virtual SAN (vSAN) and Network Virtualization (NSX) are bridging data centers and clouds, which in turn stretches compute capabilities across all of them.
  • 30. It is important to understand what makes up the SDDC: it has many components that are all highly integrated and integral to its full potential. I included a couple of key diagrams depicting SDDC 3.0. The SDDC is a VMware Validated Design, which means it has been fully vetted and is a robust, proven infrastructure. The goal of the SDDC is to extend virtualization to the whole data center. Several rich features make this ecosystem robust and able to handle the most rigorous workloads with ease:
1. Extends virtual compute to all applications.
2. Virtualizes the network for speed and efficiency, and introduces new layers of security that protect the data center.
3. Transforms storage by aligning it with application demands, making storage more personal to the VM or container-VM.
4. Management tools are giving way to automation.
5. Greatly reduces deployment times in the data center.
In the day-one general session this year, Pat revealed the holistic vision for the SDDC: portability of the SDDC and its policies, configurations, security, and services to any cloud, private or public. That removes the barriers keeping many companies from fully leveraging the hybrid cloud model; it is a lot easier to adopt the public cloud when you know you will not be compromising your corporate security and compliance requirements. Two new tools that complement the SDDC were also announced: VMware Cloud Foundation and SDDC Manager.
  • 31. VMware Cloud Foundation has the ability to completely deploy a new SDDC data center in your public or private cloud. In the public space it is fully supported, or "integrated" you might say, with VMware vCloud Air and the IBM consumer cloud, but with the API and some tweaking you are able to use Amazon, Azure, or Google as well. The overall message is that there are no barriers: if you don't have the infrastructure, go to cloud; if you have it, stay on premises until it makes more economic sense to go public.
  • 32. SDDC Manager enables intelligent use of cloud infrastructure. It provides reporting on the TCO of each cloud, public or private, once you enter your costs, and it simplifies operations with a single interface that is easy to consume and make choices from.
- A single management platform that brings it all together regardless of location
- Automates and simplifies deployments and scale-out operations
- Rich support for Docker, containers, and volumes
- Private cloud and public cloud – a hybrid cloud manager
SDDC Manager brings a rich feature set that really lets you understand hybrid cloud, and the cost and availability implications of your choices and investment in the tool. The SDDC and the new tools bring the holistic vision together, making hybrid clouds truly possible by automating and streamlining the setup of services. The whole of your SDDC is portable, allowing easy duplication, movement, and scaling of critical application sets across any of your infrastructure, eliminating physical boundaries. Cloud Foundation and SDDC Manager also streamline your day 0 to day 2 operations, eliminating the lengthy human interaction needed to operationalize. It integrates fully with vRealize Operations for manageability and analytics, and is ideal for control by vRealize Automation.
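As a sketch of the kind of cost comparison SDDC Manager surfaces once you enter your costs: the per-VM rates and VM counts below are entirely hypothetical, and the model is far simpler than any real TCO calculation.

```python
def monthly_tco(vm_count, infra_per_vm, ops_per_vm):
    """Very simplified per-cloud TCO: infrastructure plus operations, per VM."""
    return vm_count * (infra_per_vm + ops_per_vm)

# Hypothetical operator-entered rates for 500 VMs in each location
clouds = {
    "on-premises": monthly_tco(500, infra_per_vm=38.0, ops_per_vm=12.0),
    "vCloud Air":  monthly_tco(500, infra_per_vm=55.0, ops_per_vm=4.0),
    "IBM Cloud":   monthly_tco(500, infra_per_vm=52.0, ops_per_vm=5.0),
}
cheapest = min(clouds, key=clouds.get)
print(cheapest, clouds[cheapest])
```

With numbers like these, "go on premise until it makes more economic sense to go public" becomes a simple comparison the tool can re-run as rates change.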
  • 33. The crux of the SDDC and the new cross-cloud services is a tightly interwoven, integrated ecosystem, built entirely in software, that decouples critical data center functions from the physical aspect of the stack. A discussion of the SDDC would not be complete without Network Virtualization (NSX), because the ability to be truly portable depends on NSX being deployed. NSX has seen large adoption over the last two years by some very significant companies, including insurance and healthcare firms. There is broad industry backing with lots of 3rd-party enhancements: companies like Intel have partnered with VMware and McAfee to build out a heuristic analytical architecture that improves the resilience of the network and adds deep inspection as a key value in the security space. The heart of Intel's offering is their Security Controller appliance, which layers on top of NSX. Companies like IBM have fully adopted the SDDC, including NSX, to power their enterprise-class consumer cloud, and VMware uses its own technology to power its cloud. This means integration of your data center with these clouds is seamless, and the best part of this ecosystem is having unlimited capacity on demand. NSX eliminates bulky physical infrastructure and minimizes the need for physical firewalls. Traditional networking is a slow and arduous process for most organizations, with large numbers of firewall rules supported manually in the enterprise and manual analysis to ensure there are no conflicts that would keep a service from functioning properly. NSX breaks down the traditional barriers, enhances the robustness of your security and network, and shortens deployment timeframes. In the SDDC there is a lot of east-west traffic, and NSX adds a layer of control across the east-west boundaries of the platform, truly extending the network and security benefits of virtual deployments.
NSX is what streamlines the portability of VMs and apps: it attaches a security/network profile that maintains the workload's critical settings, allowing workloads to traverse not only clusters within the same data center but also other data centers as the need arises, and even supported public cloud infrastructures.
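A minimal sketch of that idea: the security profile is pinned to the workload object itself, so a move between clusters, data centers, or clouds carries the rules along unchanged. The class names, fields, and locations here are invented for illustration and are not NSX's actual object model:

```python
from dataclasses import dataclass

@dataclass
class SecurityProfile:
    """Segment membership and firewall rules owned by the workload, not the host."""
    segment: str
    allowed_ports: list

@dataclass
class Workload:
    name: str
    location: str            # cluster, data center, or supported public cloud
    profile: SecurityProfile

def migrate(workload, destination):
    """Move the workload; its attached profile travels with it untouched."""
    workload.location = destination
    return workload

app = Workload("erp-web-01", "dc-east/cluster-1",
               SecurityProfile(segment="web-tier", allowed_ports=[443]))
migrate(app, "vcloud-air/us-west")
print(app.location, app.profile.segment, app.profile.allowed_ports)
```

The design point is that nothing about the destination needs reconfiguring: the profile is part of the workload's identity, not the network segment it happens to land on.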
  • 34. Micro-segmentation of VMs is no small benefit. NSX creates a virtual network that is independent of the underlying IP network hardware; administrators can programmatically create, provision, snapshot, delete, and restore complex networks entirely in software. VMware describes micro-segmentation as the ability to "build security into your network's DNA." Intel listed the seven benefits of micro-segmentation in a current solutions brief:
1. No Ripping or Replacing What You Have in Place - VMware NSX runs on top of any network hardware, so you don't have to buy or replace any appliances. In addition, there's no disruption to your compute and networking infrastructure or applications.
2. Reduce Escalating Hardware Costs - Deploying more physical appliances to handle the growing volume of workloads inside the data center is cost-prohibitive. Looking at the capital expense alone, VMware NSX is enabling actual enterprise organizations to save 68%. This savings is based on estimating what physical firewalls would cost if IT administrators tried to approximate the same degree of control that micro-segmentation provides.
3. Curtail Firewall Rule Sprawl - Bloated firewall rules are a real problem in security management. Over the years, administrators can inherit unnecessary and redundant rules, and there's no easy way to figure out which rules are no longer needed. Firewall rule sprawl can make security audits nightmarish, and out-of-date and conflicting rules can even be an unintended source of security vulnerabilities. With micro-segmentation and VMware NSX, policies are orchestrated centrally and linked to the VMs they protect, so you can automate security policy management throughout the entire data center via a single interface. When a VM is provisioned, moved, or deleted, its firewall rules are also added, moved, or deleted.
4. Tune Up Performance with More Efficient Traffic Patterns - With physical networks, workload traffic is often required to traverse more than one network segment to reach routers and firewalls, only to come back to an adjacent workload (an inefficient pattern called hair-pinning). With micro-segmentation, traffic can usually stay in the same virtual network segment, reducing the impact on the physical network. As a result, you eliminate the extra costs and inefficiencies associated with over-subscribing core links.
5. Meet the Individual Needs of LOBs and Departments - Because VMware NSX and micro-segmentation work independently of your physical infrastructure, you gain tremendous flexibility in moving resources around and keeping security in lockstep with change. Because security is handled through software, policies can be created and operational within minutes, eliminating the lag time associated with installing more security hardware or reconfiguring network systems. Figure 1 shows how easily you can update security policies to match the needs of individual LOBs and departments. In this example, the IT department has decided to virtualize the desktops throughout Human Resources (HR). With micro-segmentation, creating and applying the security policies for the HR virtual desktops takes a matter of minutes: you simply tag all relevant systems "HR" and VMware NSX automatically applies the correct security policies.
  • 35. 6. Add a Valuable New Knowledge Area for Your Networking Specialists - Administrators use the same skill sets they have acquired around VMware virtualization, so major security improvements don't require a major learning curve. Hardware networking specialists acquire new software skills that keep them at the leading edge of both hardware and software networking security technologies. Developing expertise in the Software-Defined Data Center (SDDC) and network virtualization areas is a tremendous addition to the professional skills of network administrators and architects.
7. Future-Proof Your Operations - Micro-segmentation makes securing workloads much easier, faster, and less expensive. As a result, you can support changes with greater confidence, and even reallocate resources to new project areas. Network virtualization with VMware NSX is also a significant—and non-disruptive—step towards the SDDC model, which means you're not only strengthening security today, you're laying important groundwork for the SDDC of the future.
Hyper-Converged Infrastructure (HCI) is the integration of the technology stack, including compute, storage, and network, into a simple building block that is easily consumable within the data center. It is a paradigm shift from the traditional compute, storage, and network silos of the past, and an unspoken part of the SDDC: though not strictly required, the SDDC 3.0 VMware Validated Design assumes you are using vSAN-ready nodes and products such as vxRail or vxRack in the private cloud. In the public cloud there is no real assumption, but both IBM and VMware vCloud Air are based on that methodology. HCI offers many benefits over the traditional stack:
1. Simplified building blocks
2. Lower TCO
3. Better overall performance
4. Shortened deployment cycles
5. vSAN-based solutions are best for the SDDC
6. The ability to adjust the scale of the building blocks to meet data center requirements in less space than traditional infrastructure
7. Leverages a robust leaf-and-spine network topology
8. Continues the logical segregation of management, edge, and consumer workloads
9. Overall "ease of deployment"
For the on-premises private cloud it is recommended to choose a building block so that you can design the data center holistically, increase ROI, and consolidate the traditional platforms by replacing them with HCI. One of the most recommended solutions, brought up in multiple sessions along with a vExpert session on SDDC, was the use of vxRail and, at scale, vxRack. This makes a lot of sense versus the traditional stack, as shown in the comparison below:
  • 36. This would be an example of a Single Resource Unit (SRU) and a standardized building block. This particular building block is based on the VCE vxRail 280F, an all-flash unit in the maximum configuration available from them. We can of course decide what our standardized building block is by doing our own research and due diligence; for example, our SRU might be a Simplivity unit, a Cisco HyperFlex, or a self-built unit like we use in our current vSAN clusters. Once you have selected your building block you can begin to visualize what resources you can achieve within a given footprint. As the example below shows, we can achieve a much denser, more cost-effective footprint at rack scale than we can with our current traditional stacks of compute, storage, and network, thereby increasing our ROI and lowering TCO. The comparison to traditional compute is not even an equal one: since we have only compute or only storage in our racks, achieving the same capacity is close to a 5:1 ratio, steeply driving up the cost of doing the business we need to do. We still appear to be a cost center because managing the traditional stack is expensive, and on top of that the pace of physical deployment is slow and costly as well.
On top of there being a revolution in the data center because of the SDDC, there is also a revolution in application delivery, not isolated to the end-user compute sector of IT; it challenges application infrastructures with the same call to innovate and make a digital transformation: to go from traditional application infrastructures and traditional front-end clients in server/client computing to cloud-native applications. Cloud-native application infrastructures, just like EUC, are decoupled from the OS; they run in their own space and get their own resources. The good news is that they are still a VM from a management and lifecycle perspective. The benefits of this transformation are identical to the benefits of Hyper-Converged Infrastructures, and this revolution is called containers:
- Simplified building blocks
- Docker compatible
- Lower TCO
- Overall performance benefits
- Shortened deployment cycle
- Overall "ease of deployment"
- Intelligent automation
- Designed for HCI and evolved for the SDDC ecosystem
VMware has two types of containers, one which runs on the current SDDC infrastructure and one that is platform-integrated with the SDDC: VMware Integrated Containers and Photon Platform containers. A container is a representation of compute/storage/network/security from the application's perspective. Its deep integration with the SDDC brings a lot of opportunities in how we build, design, define, and deliver the application while improving its portability, security, performance, capabilities, and lightweight framework. The point is to eliminate the BLOAT, not add to it. There are clear scale and economic benefits at all layers. To understand the benefits of containers you have to reflect back on the holistic benefits of the Software Defined Data Center ecosystem: at every level, containers gain robustness.
I would go so far as to say containers are VMware's statement of their agility and commitment to providing superior ROI to their customers. As Pat said on day one, "…which way will you face? The obvious answer is we will face forward together!"
Benefits that containers derive from the SDDC:
- Intelligent automation
- Up to 6x to 8x faster deployment cycles, by eliminating complex processes around system design, testing, deployment, configuration, and scaling
- Repeatable process; for containers these would be part of a pattern, part of a workflow, delivered through vRealize Automation
  • 38. - Increase admin productivity by up to 2x by automating day 0 through day 2 tasks such as patching, updating, security hardening, and monitoring
- Containers are likely to reduce the overall TCO of application delivery by 30% to 40% by decoupling the OS
- Eliminates hardware costs when delivered as a service through cloud infrastructure: develop on private, potentially deploy on public, while maintaining all profile configurations such as security and network requirements
- Portability not just inside the data center, but easily to other data centers or public cloud infrastructures
- Design with objectives and intelligent decision points to fully leverage available capabilities and consume or deliver services
So what is a container, according to VMware? vSphere Integrated Containers (VIC) combines the agility and application portability of Docker Linux containers with the industry-leading virtual infrastructure platform, offering hardware isolation advantages along with improved manageability. VIC consists of several different components for managing, executing, and monitoring containers. One of the critical components is the Virtual Container Host (VCH). The VCH is the means of controlling, as well as consuming, container services: a Docker API endpoint is exposed for developers to access, and desired ports for client connections are mapped to running containers as required. Each VCH is backed by a vSphere resource pool, delivering compute resources far beyond that of a single VM or even a dedicated physical host. Multiple VCHs can be deployed in an environment, depending on business requirements; for example, to separate resources for development, testing, and production.
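Because a VCH exposes a Docker-compatible API, existing Docker clients can target it like any Docker host. This tiny sketch only assembles the endpoint a client would hit; the hostname is invented, and while 2375 is the conventional unencrypted Docker port and `GET /containers/json` is the standard Docker Engine call to list containers, a real VCH deployment would typically front this with TLS:

```python
def docker_endpoint(vch_host, port=2375, api_version="v1.24"):
    """Base URL of the Docker-compatible API a VCH exposes.

    Host, port, and API version are deployment specific; these defaults
    are illustrative, not VIC's documented configuration.
    """
    return f"http://{vch_host}:{port}/{api_version}"

def list_containers_url(vch_host):
    # GET /containers/json lists running containers in the Docker Engine API
    return docker_endpoint(vch_host) + "/containers/json"

print(list_containers_url("vch-dev.example.com"))
```

The point is that developers point their existing tooling at the VCH and consume vSphere resource-pool capacity through the familiar Docker interface.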
VMware Integrated Containers for vSphere benefits:
- Automatically integrates with vRealize Operations, giving you the ability to set up monitoring and alerts, create dashboards specific to a group of containers, report on them, and run advanced analytics.
- Ideal for use with vRealize Automation for resource provisioning, management, and deployment of containers.
- An ideal building-block methodology.
- Leverages existing tool sets to monitor, manage, and support containers.
- The containers console is fully integrated within the vSphere Web Client.
- Fully leverages vSAN and vVols.
  • 39. This is fully visualized below. The key to the portability of containers, and to realizing their full value, will be the adoption of NSX and the ability to fully virtualize the resources used by the container, so it is not locked in place. The container is a pinnacle of virtualization, taking full advantage of everything virtualization has to offer; what we invest, we will get back in the return from containers. The next transformation, or really migration, is to cloud-native applications, and the standardized building block and source of delivery will be the container. NSX also enables active/active configurations, provides load-balancing features, and is further extensible with 3rd-party products to enhance the security of our application infrastructures. Some of the feature sets that really provide value and resilience to containers are categorized below:
Container Management Portal – a portal for container administrators to manage containers, repositories, images, and hosts, representing the whole lifecycle of the container.
Container Registry – an enterprise registry to store container images, manage replication, and provide role-based access control over the company's critical images.
Container Engine – Docker API compatible, and deeply integrated into the SDDC and vSphere.
  • 40. There is already a rich and diversified portfolio of services available that enhance containers and add functionality. Now we have the platform, and it can be anywhere we want it, at any time we want, with no concerns for its security profile or settings, as those move with it. In conclusion, we really need to spend some time reflecting on what digital transformation has meant for us; we need to deepen our commitment to the ecosystem and overcome obstacles of understanding if we truly want to realize the full value of the Software Defined Data Center.
"VMware provides the software platform that enables our customers to consume and deliver applications and services that power their enterprise. Deep levels of integration and capability." –VMware
Our job in IT is now to deliver superior application capabilities and open up new opportunities to improve our business. The ability to use the SDDC to decouple the application from the infrastructure gives us great latitude in how we deliver to the customer. It DOES require a new journey to begin, but the journey has rich rewards and ultimately allows IT to be perceived as an innovator to the business, not a cost center.
  • 41. vRealize Operations and vLog Insight
I have been on the vCenter Operations Manager bandwagon since the beginning. Having at your disposal a complete view of the whole virtualized data center has so many benefits: from real-time monitoring and analysis to resolve an issue, to trending usage, predictively comparing outcomes, and sizing a workload appropriately, to modeling the future of the compute estate to see the impact of choices so we can proactively ensure capacity and availability of the whole platform holistically. We immediately started taking advantage of the health statistics and the ability to trace performance problems to the real bottleneck. We set up alerts and created some unique views of our most critical resources. We worked within the boundaries of having only the Standard edition at the time, such as not being able to customize a view to look at multiple servers at once as an application set; but we could pull analytics for each server in the application set and review them manually, look at very long time frames, or narrow the views down to the specific metrics we wanted to analyze. We could do capacity modeling and analysis, even though it could not yet apply reservations to the resources, so some further manual analysis was required. Between vCenter Operations and vCenter statistics we had platform-native information from host to VM. We also had several other scripts in PowerCLI and Perl, and could do more traditional analysis as needed for validation, or automate the as-built documentation of a cluster. The next level of licensing offered so much more, so I put together a presentation covering our current licensing, the features we had and their limitations, and the much greater capability of the uplifted versions.
Through that vessel and several discussions over capabilities we made the decision to go to the next level of licensing that would allow for custom views, super metrics, highly available, and free updates to next versions. This also allowed for greater levels of automation through what at the time was called vCAC or vCenter Automation as well as together integration between the two products and vCenter as well. We also made the decision to go to the next level of support which is Business Critical Support which assigned us a specific Engineer and access to the vExperts and advanced support capabilities VMware offered as a premium service. One of the first task we did with them was go over our whole implementation and our current data with them and talk holistically about our overall goals and our journey to a VMware Validated Design 2.0 deployment. We met with them on all areas of our network and also met their vExperts for each and validated we had done the right thing. We included the whole team as we wanted everyone to feel pride in what they had accomplished and what the next steps were. VCOps changed how we ran the team and all data from VCops was reviewed ata high-level each Data Center each morning. We used the scoring system to represent a report card. We lso ran weekly scripts that entirely documented the infrastructure, vSphere Platform Health and Capacity
  • 42. and performance reports. In vCOps we enabled a lot of reporting at each cluster level that allowed us to capture:
1. Oversized VM Report
2. Undersized VM Report
3. Cluster Health Summary
4. Resource Usage Reports
5. Capacity Remaining
There were several more reports we could have selected, but at the time we wanted to stay limited to pertinent data that would have a positive effect on our operations. As any of my team would tell you, my standing statement is "If the house is on fire then I don't care about the add-on bedroom you are working on": all resources will methodically engage to assist on resolution, and their first sources of data were powered by vCOps and the information analyzed through it. In 2014, following an acquisition, VMware deprecated the older vCenter Operations Manager in favor of vRealize Operations Manager. They took the best of both worlds: from the acquired product, its best virtualization features and superior engine; from vCOps, the look and feel, the customizable views, and all of the platform-specific features, reports, workload analysis, and default views; and combined the two into a "best in breed" native tool that was entirely integrated and completely extensible. We were of course immediately sold, since there were no additional license costs and the upgrade was free to us. It brought much-needed features and functionality as well as higher levels of integration and analytics, but also some fundamental changes in its architecture. Moving over the existing data so we did not lose our six months of retention introduced some complexity, so we agreed to bring in VMware Professional Services to collaborate with us on the solution. We are not a sit-on-the-sidelines organization; we just needed a good engineer with specific product skills to assist us in the design and deployment of the new solution and its integration into the SDDC.
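The oversized/undersized reports above come down to comparing sustained utilization against allocation. A hedged sketch of that logic — the 20%/80% thresholds and field names are illustrative examples, not the defaults vCenter Operations actually ships with:

```python
# Illustrative sketch: classify VMs as oversized or undersized from their
# average CPU utilization versus allocation. Thresholds and field names
# are example assumptions, not the product's real defaults.

def classify_vms(vms, low=0.20, high=0.80):
    oversized, undersized = [], []
    for vm in vms:
        util = vm["avg_cpu_mhz"] / vm["alloc_cpu_mhz"]
        if util < low:
            oversized.append(vm["name"])    # allocated far more than it uses
        elif util > high:
            undersized.append(vm["name"])   # under sustained pressure
    return oversized, undersized

vms = [
    {"name": "db01",  "avg_cpu_mhz": 9000, "alloc_cpu_mhz": 10000},
    {"name": "web01", "avg_cpu_mhz": 300,  "alloc_cpu_mhz": 4000},
    {"name": "app01", "avg_cpu_mhz": 2000, "alloc_cpu_mhz": 4000},
]
over, under = classify_vms(vms)
print(over, under)  # web01 is oversized, db01 is undersized
```

The same pass over memory metrics would complete the picture a weekly report needs.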
The upgrade went smoothly and the engagement with VMware went perfectly: we customized several dashboards and learned how to fully leverage the different aspects of the tool. We had included training in the engagement, along with documentation to support each step and provide usable collateral for the future. We are always excited about the next uplift because we know it will bring more capability, and we are equally excited about the extensibility through management packs that connect to the other platforms we rely on for detailed statistics and visibility of the whole stack. This year I was not disappointed: they announced some great new features being released with the 6.3 version of vRealize Operations (vROps). So this year we came to VMWORLD to take it further, to understand the path for vROps and the features released this year:
  • 43. 1. vLog Insight became part of the vCloud Advanced licensing, so we have the full version for every host in our environment. It allows us to use unstructured data to support our engineering and operations on the platform, adding the remaining layer of integration we had hoped for. Now, with vROps, we can analyze, alert, monitor, and act upon all available data, including what is in the host logs.
a. This will help with RCA analysis and provide a heightened level of insight into the platform; it will also become the store for host and other component logs.
b. We can write workflows that are triggered by vROps based on fixed metrics and unstructured data, leveraging the close integration of vROps with vRealize Orchestrator (vRO) and vRealize Automation (vRA).
c. Splunk does not integrate with vROps and is not the native tool on the SDDC for analysis of unstructured data, so to remain in line with our commitment to the SDDC 3.0 VMware Validated Design (VVD) we need to use vLI. The two tools duplicate each other's features; the difference is that vLI is integrated with vROps and the SDDC, and Splunk is not.
d. There is also a cost benefit with vLI: we already own it as part of our VMware licensing.
e. It furthers the holistic goal of the "self-healing" data center.
2. Automatic Workload Placement has been introduced into the vROps capability set. Through tight integration with the Distributed Resource Scheduler (DRS) that is part of the vSphere cluster, it ensures the committed resources are available or moves the VM to a host that has them. Because of this tight integration with vSphere, they are able to deliver the policy-driven workload placement technology that fills the gap between DRS at the cluster level and resource availability at the SDDC level.
a. Automatic Workload Placement is a manually engaged activity
b. 
vROps closely analyzes benefits and trends to move the best workloads to the less utilized cluster.
c. Initial placement on the cluster and ongoing resource management are done by DRS and occur every 5 minutes (15x20-second time slices); 20+ VM metrics and 5+ host metrics are checked to determine the best placement within the cluster based on the usage required.
d. vROps queries data from vCenter every five minutes and receives 15x20-second interval time slices.
3. Actionable Events adds an Action button to some alerts and warnings so that you can take action using the recommended or self-set setting, and reboot if necessary; mainly this applies to resource management of VMs. The action happens immediately, so it needs to be used carefully: make sure any changes have been through proper change management and other appropriate processes (VM uplift) prior to acting.
  • 44. 4. Hardening Guide Automation is another announced feature: you can upload the Hardening Guide against which you have configured your security stance and settings, then act upon the flagged items in the environment. You could of course use that capability to add custom items as well. It is another part of the resilience of the SDDC and another security control in place.
5. Capacity Modeling Reservations allows you to enter your current capacity requests, and the cluster will balance accordingly while waiting for the workloads to become "real."
One drawback was that you could not schedule the action, but through proper alert actions you could automate and schedule the requirement using vRA and vRO. Many of the common tasks performed in vSphere are already set up as workflows to act upon in vRO, so there is a lot of opportunity to leverage data from vROps for automated actions. The point is that vROps is the brain of the SDDC, with deep integrations with all of the products and cross-integrations between the other products in the SDDC. With management packs and Hyperic you can further integrate your infrastructure and see that data in vROps. As an example, IBM AIX has a management pack that plugs into vROps and allows you to report on that environment. We have also integrated EMC MPs so we can gain visibility into the underlying arrays and storage. So it was time to take a look under the hood and really understand how it all works; let's dive in and really see what the new vROps is about. We had it deployed, and we were using a lot of the capabilities of vROps: capacity management and modeling, platform/workload analysis, waste reclamation, incident response and resolution, and assisting in root cause analysis. We were doing in-depth weekly reporting so we could look at the different aspects of the platform and make recommendations on health, performance, and capacity.
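Conceptually, the cross-cluster move that Automatic Workload Placement performs reduces to finding the cluster with the most headroom that can still fit the VM's demand. A much-simplified sketch — the real engine weighs the 20+ VM and 5+ host metrics noted above, and all names and numbers here are invented for illustration:

```python
# Much-simplified sketch of policy-driven workload placement: choose the
# least-utilized cluster that can still fit the VM's demand, honoring a
# policy exclusion list. Not the actual vROps/DRS algorithm.

def place_vm(vm_demand_mhz, clusters, excluded=()):
    candidates = [
        c for c in clusters
        if c["name"] not in excluded
        and c["capacity_mhz"] - c["used_mhz"] >= vm_demand_mhz
    ]
    if not candidates:
        return None  # no cluster has room; raise an alert instead of moving
    # pick the least-utilized candidate cluster
    return min(candidates, key=lambda c: c["used_mhz"] / c["capacity_mhz"])["name"]

clusters = [
    {"name": "CL-A", "capacity_mhz": 100_000, "used_mhz": 85_000},
    {"name": "CL-B", "capacity_mhz": 100_000, "used_mhz": 40_000},
    {"name": "CL-C", "capacity_mhz": 100_000, "used_mhz": 55_000},
]
print(place_vm(8_000, clusters))                      # CL-B: most headroom
print(place_vm(8_000, clusters, excluded=("CL-B",)))  # falls back to CL-C
```

The exclusion list mirrors the policy option of pinning a VM to its current cluster.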
We were also managing our whole alerting system and NOC integration for improved response and faster resolutions. But were we fully leveraging all of the capabilities of vROps and getting the most value from it that we could? I spent some time with a vExpert over in the VMware Pavilion in the Solutions Exchange. I told him that, starting out, I wanted to cover any changes in the infrastructure design of vROps and then discuss the new features in the 6.3 release. I also had a couple of sessions which focused on this release and some of its features. I wanted a more holistic approach and, having worked on enterprise tools in the past, was curious when they would go to a Master/Collector architecture. I had recently looked at the SDDC Reference Architecture 2.0 and discovered that there was indeed a better recommended approach: a distributed approach with a single pane of glass.
  • 45. So I went to the vExpert and did not initially reveal what I had learned and referenced on my own, as I wanted his input based on how we are currently deployed. I explained that we currently have two distinct deployments in HA pairs, and while this had been a design decision made by our vROps consultant when he came out to do the deployment, I have since questioned the logic of the disparity: though we are able to get data, we have to go to the Data Center in question to get data about that Data Center. He immediately explained and whiteboarded the distributed approach, with a vROps cluster at the main site and collectors at the remaining site, using SRM to protect the vROps master infrastructure by failing it over to the secondary site. I did not take a picture of the whiteboard, but it looked like this: As shown in the depiction, you would want a Master and Collector methodology for the vROps components, while Log Insight uses a distributed Master/Worker methodology. SRM could be used to support DR for the vROps core infrastructure, which would also be the strategy for the vRA infrastructure. The underlying software infrastructure of vROps shows a robust layered
  • 46. design; in the SDDC, vROps sits front and center as the "brains" of the SDDC. VMware describes vRealize Operations as a general-purpose analytics engine which can take inputs from multiple sources within the SDDC, or through management packs from a variety of other platforms and technologies, and produce meaningful reporting and data from those sources:
1. Storage management packs for most leading vendors (HP, EMC, HDS, NetApp)
2. Hyper-converged infrastructure, especially vSAN-ready HCI
3. Traditional hardware through management packs or Hyperic (HP, DELL, IBM, Cisco)
4. AIX / PowerVM
5. Other 3rd-party solutions that have taken advantage of the SDK or APIs to provide an appliance with additional inputs
6. vRealize Automation / vRealize Business, which are tightly integrated with vROps
Having used many different third-party tools in the past, I can confidently say that vROps, with its extensibility and built-in features, outstrips other tools. We run a lot of scenarios through vROps and consume it on a daily basis to ensure that everything in the Data Center is well protected. It has shortened our times to resolution on issues and has given us valuable capacity information that we have used to forecast our needs and "roadmap" our requirements over the last two years. Due to its tight integration with the SDDC, and specifically vSphere, we have enjoyed an unparalleled view of the enterprise and realized a real ROI from using this tool in our environment. It is helping us meet some very key IT challenges using its metric analysis and intuitive reporting:
1. More Control
2. More Agile
  • 47. 3. Data Overload
4. Over-Provisioning
5. Under-Provisioning
6. VM Sprawl
7. Enables the SDDC at every layer
Now, with its integration with Log Insight and vRealize Business, we can take the tool to new levels in our ability to analyze and cost the environment and realize a deeper value. Integration with Log Insight provides the ability to input unstructured data (logs), do analysis, and write rules based on log entries to enable further actions and provide more comprehensive visibility into issues, which will further shorten our time to resolve incidents. We will also realize better protection of log information. The addition of vRealize Business to our licensing model, tightly integrated with vROps, adds business intelligence to our repertoire of capabilities, enabling us to validate cost models and drive appropriate cost recovery for services to our customers. vROps is also intelligent enough to alert you if a source stops sending data; this allows you to ensure all sources are reporting accurately and, if not, to identify the source of the issue quickly. Its extensibility enables many sources of data to be integrated into vROps analytics and reporting, offering new layers of information and visibility at every enabled layer. New features like Automatic Workload Placement drive a deep integration with DRS on the clusters. This integration adds intelligence at the cluster level, making clusters happy, just as DRS does at the VM level, making VMs happy. Policy-driven and manually enacted, imbalances in cluster compute resources can be remediated fairly quickly. vROps has also added a DRS dashboard where you can see how each cluster is configured and balanced, and make changes to DRS settings right from the dashboard. Workload Utilization shows a graphical map of how the clusters are utilized, giving you more insight into what is going on inside them.
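One way to picture what such a dashboard is summarizing is a simple per-cluster imbalance score, e.g. the spread of host utilization within the cluster. A hedged sketch — this is only a visualization aid, not DRS's actual cost/benefit algorithm:

```python
# Illustrative imbalance score: population standard deviation of per-host
# CPU utilization within a cluster. DRS's real balancing logic is
# cost/benefit based; this only shows "balanced vs. imbalanced" at a glance.
from statistics import pstdev

def imbalance(host_utils):
    """host_utils: per-host utilization fractions (0.0-1.0)."""
    return pstdev(host_utils)

balanced = [0.55, 0.57, 0.56, 0.54]
imbalanced = [0.15, 0.90, 0.25, 0.85]
print(f"balanced:   {imbalance(balanced):.3f}")
print(f"imbalanced: {imbalance(imbalanced):.3f}")
```

A low score means DRS has little work to do; a high score is the kind of condition the dashboard surfaces.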
After configuring Virtual Data Centers or policies in vROps, it will be able to act upon cluster workloads when enacted. The policy also allows you to exclude VMs from this activity if you have special requirements for a VM to remain within a cluster. Also, once you upload the appropriately configured hardening guide from VMware, you can automatically remediate issues with the security posture of VMs, clusters, and the whole Data Center, and you will receive actionable alerts when there are compliance issues. In this version you can model capacity not only for clusters but also for VMs, and have these reservations remain in place and be used in determining further capacity of the cluster, allowing you to see current utilization as well as the computed utilization of the reservations. This
  • 48. will give a true measure of the remaining capacity of a cluster, allowing you to remediate any constraints. You can also set the date on which the reservation will occur so that it is automatically removed once it has been realized. Within the capacity module you can also explore reclaimable resources prior to adding new resources to a cluster, allowing you to act upon those items from vROps and manage your resources. vROps continues to sort information into three areas: Health, Risk, and Efficiency. Health covers the immediate issues impacting the data center today that need attention; it can also show under-sized resources for VMs. Risk covers time remaining and forecasted loads, a more predictive component allowing you to proactively address resource shortfalls before they become real shortfalls impacting the environment. Efficiency covers the opportunities to reclaim resources and to understand whether there is waste in the environment that needs reclamation. The extensibility of vROps through management packs opens up new avenues of data collection and enterprise visibility. One collection of MPs I learned about is the Blue Medora MPs, now rolled up into a suite called TVS (True Visibility Suite), which further enhances vROps by adding support and reporting for additional platforms.
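Returning to the capacity module for a moment, the dated-reservation arithmetic described above can be sketched as: remaining capacity is current free capacity minus any reservation that has not yet been realized. All field names and figures below are invented for illustration:

```python
# Illustrative sketch of dated capacity reservations: a reservation counts
# against remaining capacity until its start date, after which its demand
# is assumed "real" and already reflected in used capacity.
from datetime import date

def remaining_capacity_gb(capacity_gb, used_gb, reservations, today):
    pending = sum(r["gb"] for r in reservations if r["starts"] > today)
    return capacity_gb - used_gb - pending

reservations = [
    {"name": "ERP refresh", "gb": 256, "starts": date(2016, 11, 1)},
    {"name": "DB upgrade",  "gb": 128, "starts": date(2016, 8, 1)},  # realized
]
today = date(2016, 9, 1)
print(remaining_capacity_gb(2048, 1200, reservations, today))  # 2048-1200-256
```

The automatic removal on the start date is exactly why the pending reservation drops out of the subtraction once `today` passes it.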
  • 49. In summary, vROps brings a lot to the platform, with deep integrations at every layer of the SDDC. It provides a rich feature set with unparalleled out-of-the-box (OOB) capabilities and superior extensibility to integrate new functionality above and beyond the tool's powerful native feature set. We have seen the ROI of vROps and its functionality in our organization many times over. The tool is not only native but stands apart from other tools for other reasons:
1. Intelligent and Predictive – Today's IT application infrastructures are complex and run across multiple tiers of software and hardware. vROps understands that and easily adapts to and supports virtualization, hybrid cloud, and complex multi-tiered applications. vROps was designed to be intelligent and predictive, able to analyze platform requirements and make intelligent, useful recommendations and actions.
2. Smart Alerting – vROps identifies the root cause of issues and notifies administrators of the issue while filtering out unwanted noise (extraneous notifications), and it makes recommendations for remediation. The overall benefit is less time spent on problem resolution.
3. Policy-Driven Automation – vROps comes OOB with several policies which can be customized to quickly begin realizing value, as it makes it easy to automate monitoring and guide the remediation of issues. vROps immediately recognizes and integrates with the vSphere infrastructure and begins self-learning through its anomaly-based monitoring.
  • 50. 4. Enterprise Vision and Comprehensive Execution Capabilities – Tool proliferation is often due to the closed nature of tools that can monitor and act upon only a single focus area. One of vROps' greatest differentiators is its very extensible platform, which has allowed for a rich 3rd-party management pack (MP) ecosystem. By implementing other MPs, vROps can capture and analyze data from the application all the way down to the storage and network layers. Once data is analyzed, it is presented via the unified dashboard.
5. Enforce Compliance – With its new ability to receive a hardening guide upload, it ensures from a security view that the hosts and servers running are as hardened and secure as possible. Issues arising that need remediation are surfaced to administrators.
6. No Hidden Costs – We get vROps with our licensing, and each licensing level has a clear set of capabilities. For example, management packs cannot be used in the Standard version, and this is made clear; the ones available for Advanced/Enterprise can easily be found in the marketplace, several are free, and where there is a cost it is clearly noted. OOB, vROps has an unparalleled ability to monitor, report, and maintain visibility into your vSphere environment.