[Notice] Permitted for use until 2016/5/31 (one year from the presentation of this material). May be used as sales material or seminar material. Printing permitted.
Separate approval must be obtained before publishing on the Web, in print media, etc.
WE NEED A NEW APPROACH TO THE NETWORK
The Software Defined Data Center takes a different view of the data center entirely to provide a fundamentally more secure, more agile data center. NSX is the foundation for that software defined data center.
A typical data center consists of physical network, storage, and compute assets. With an SDDC approach, these components remain in place without the need to supply new hardware to the data center.
*Click
On top of the physical infrastructure today, we have a virtualization layer that consists of the hypervisor.
If we examine this hypervisor layer a bit more closely, we see that there exists networking functionality that is built into the virtual switch.
What we’ve done with NSX is take the traditional functions of hardware networking (switching, routing, load balancing, and firewalling) and embed them into the hypervisor layer itself.
By embedding these functions into the hypervisor, we are able to take advantage of some significant benefits that allow you to rethink the data center infrastructure itself. With this approach, we are able to realize very high throughput rates and secure east-west traffic between every single virtual machine. This is done natively to the platform by embedding these functions in every hypervisor in the data center.
By distributing this embedded model to every hypervisor, we create a building block for the data center that now adds additional networking and security capacity every time a new host is added. This platform allows us to create what can essentially be thought of as a network hypervisor, an abstraction layer that sits between the underlying physical infrastructure and …
*Click
virtual networks above. Each of these virtualized networks is able to act in complete isolation from the others, each with its own virtualized compute and virtualized storage. Because each of these networks is isolated by default, there is no way for them to communicate with one another unless they are explicitly permitted to do so.
INHERENTLY MORE SECURE FOUNDATION
This ability to create entire networks fully in software brings with it some new implications in how security can be applied. First, as mentioned before, the virtual networks are isolated from one another by default. This means that there is no risk of unrelated streams of data communicating with each other. Secondly, by embedding security functions into the hypervisor, we are effectively able to provide firewalling capability at every virtual machine all the way down to the individual virtual network interface – a level of fine-grained security that has previously been unreachable and is not possible through legacy physical appliances.
This enables us to limit the spread of threats inside the data center in a way that is not otherwise possible. This is what we mean when we use the term “micro-segmentation”: the use of fine-grained policies and network controls to enable security inside the data center, preventing the lateral spread of threats once they have overcome perimeter defenses.
Why virtualize your networks?
For many, it is simply about speed: they must provision applications faster and deliver back to the business in agile ways. Nexon and eBay were great examples.
For others, it is all about efficiency: transforming the economics of the data center. The USDA and Citi were great examples.
The third reason has to do with security.
IT automating IT
Faster project onboarding
Elastic Services
Streamline Security Enforcement
Mergers & Acquisitions
Developer cloud
Leverage vSphere investment
Faster application development
Brings power of cloud on-prem
Multi-tenant infrastructure
Robust security to isolate each tenant organization
Multi-tenancy for legacy apps
NSX just provisioned everything inside that large green box.
This entire virtual network - with its L2 through L7 services – is deployed in software on top of the existing physical network.
And IT didn’t need to touch the physical network. No creating VLANs, no configuring routing, ACLs, or firewall rules.
Inside this construct we have three layer 2 switches along the left side.
They’re connected by a layer 3 router, so they become layer 3 subnets containing a web tier, app tier, and database tier.
Along with the L2 and L3 services, each one of the four VMs has a security profile associated with it. So they have firewall rules attached to them.
The web tier can talk to the app tier, the app tier can talk to the database tier, but the web tier can’t talk to the database tier, for example.
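The default-deny, tier-to-tier policy just described can be sketched as follows. This is an illustrative model in Python, not the NSX API; the `Policy` class and tier names are assumptions made for the example.

```python
# Minimal sketch of micro-segmentation: anything not explicitly
# allowed between tiers is dropped (default deny).

class Policy:
    def __init__(self):
        self.allowed = set()            # set of (source, destination) pairs

    def allow(self, src, dst):
        self.allowed.add((src, dst))

    def permits(self, src, dst):
        # Default deny: only explicitly allowed pairs may communicate.
        return (src, dst) in self.allowed

policy = Policy()
policy.allow("web", "app")              # web tier may reach the app tier
policy.allow("app", "db")               # app tier may reach the database tier

assert policy.permits("web", "app")
assert policy.permits("app", "db")
assert not policy.permits("web", "db")  # web tier cannot reach the database
```

The key point the sketch captures is that isolation is the starting state; connectivity is the exception that must be granted.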
And on the layer 3 router on the right, I’m using NAT to get to the outside world, but I could be using static routes or dynamic routes instead.
Now, for most enterprise companies this takes 3 weeks to a month to provision. With NSX, this entire environment can be provisioned in 30 seconds.
And it’s ready to use. Without having to touch the physical network.
So this is some pretty powerful stuff that you can do in terms of speed and agility.
Take network configuration and services
Replicate to a 2nd site
Failover if first site is down
Possible because now in software
Finally, a true platform requires the successful participation of a third-party ecosystem. NSX has developed a rich ecosystem of partners spanning the physical-to-virtual, operations and visibility, app delivery services, and security services categories. This extensible, distributed service platform supports the novel concept of dynamic service chaining, providing multiple platform integration points and automating the deployment, orchestration, and scale-out of partner services.
Virtual SAN is VMware’s software-defined storage solution, built from the ground up for vSphere virtual machines. It abstracts and aggregates locally attached disks in a vSphere cluster to create a storage solution that can be provisioned and managed from vCenter and the vSphere Web Client.
VSAN is hypervisor-converged – that is – storage and compute for VMs are delivered from the same x86 server platform running the hypervisor. It integrates with the entire VMware stack, including features like vMotion, HA, DRS etc. VM storage provisioning and day-to-day management of storage SLAs can be all be controlled through VM-level policies that can be set and modified on-the-fly.
VSAN delivers enterprise-class features, scale and performance, making it the ideal storage platform for VMs.
So how does this virtual SAN work?
It’s a distributed object store that’s implemented directly in the hypervisor and integrated with our management solutions through SPBM, Storage Policy Based Management.
Virtual SAN provides hypervisor-converged infrastructure that combines compute and storage into one platform.
It uses flash for performance optimization and caching and it uses a distributed algorithm to ensure reliability and data protection.
And it offers customers better scalability and performance at lower price points.
And it’s incredibly simple.
Virtual SAN is built from the ground up for vSphere environments, and its core value proposition comes from the following key strengths:
Radically Simple
It is embedded in the ESXi kernel and does not need to be installed like a storage appliance. Just two clicks and it’s enabled!
It uses storage policies to assign storage services to specific VMs. It then automatically tunes and rebalances storage to ensure that the VM storage SLAs stay compliant with the policies throughout the lifecycle of the VM.
It is managed through the same web interface as the rest of your vSphere environment. This makes it easy for even the VI Admin to manage storage and eliminates the need for specialized skillsets.
It is completely integrated into the VMware stack, and works seamlessly with other vSphere features and VMware products.
High Performance
VSAN 6.0 uses a new flash architecture for caching and data persistence. This provides the ability to get high IOPS with consistently low latencies, suitable for business-critical or transaction-processing applications that require consistent response times.
It is embedded in the ESXi kernel and therefore optimizes the data I/O path better than other technologies that need a storage virtual appliance.
One of VSAN’s core advantages is its ability to scale performance and capacity in a linear and predictable manner: both scale out and scale up by adding flash, magnetic disks, or hosts as needed.
Lowers TCO by as much as 50%
VSAN has a hardware-independent architecture that can utilize cheaper server-side, industry-standard components to reduce storage capex. Scaling and purchasing in chunks gives you the flexibility to change vendors or hardware over time, ensuring you use the latest available on the market.
Grow-as-you-go scaling allows investments to be spread over time in a more cost-effective manner
Last but not least, its VMware integration, policy-driven control and automation make it operationally efficient, saving even more dollars in the long run
VMware’s Storage Policy Based Management framework allows you to define storage requirements on a per-VM basis, based on the needs of the applications running in the VMs.
Simply define the amount of capacity, performance and availability for each VM
VSAN then matches those defined requirements to underlying storage infrastructure.
Unlike traditional external storage, where provisioning is done at the storage array layer in a more rigid, hardware-centric way, Virtual SAN puts the application in charge and allows you to provision and control storage in an application-centric way.
Virtual SAN software abstracts underlying hardware and automates ongoing management of the storage SLAs assigned to VMs
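The per-VM policy matching just described can be sketched as follows. The dictionaries and the `satisfies` helper are illustrative assumptions for the example, not the SPBM API.

```python
# Sketch of policy-based placement: a VM's storage policy lists its
# requirements, and they are checked against what the underlying
# storage can actually deliver.

def satisfies(capabilities, requirements):
    """True if the storage meets every requirement in the VM's policy."""
    return all(capabilities.get(k, 0) >= v for k, v in requirements.items())

# Hypothetical capabilities of the underlying Virtual SAN datastore.
datastore = {"capacity_gb": 4000, "iops": 40000, "failures_to_tolerate": 1}

vm_policy = {"capacity_gb": 100, "iops": 5000, "failures_to_tolerate": 1}
assert satisfies(datastore, vm_policy)          # this VM can be placed

demanding = {"capacity_gb": 100, "iops": 5000, "failures_to_tolerate": 2}
assert not satisfies(datastore, demanding)      # policy cannot be met
```

The point of the sketch is the direction of control: requirements are declared per VM and checked against the infrastructure, rather than carved out of the array up front.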
End result => No more LUNs or volumes…
This simplifies storage management…. And as you will see, saves you time and money.
Virtual SAN enables both hybrid and all-flash architectures.
Irrespective of the architecture, there is a flash-based caching tier which can be configured out of flash devices like SSDs, PCIe cards, NVMe etc. The flash caching tier acts as the read cache/write buffer that dramatically improves the performance of storage operations.
In the all-flash architecture, the flash-based caching tier is intelligently used as a write buffer only, while another set of SSDs forms the persistence tier that stores the data. Since this architecture uses only flash devices, it delivers extremely high IOPS, up to 100K per host, with predictably low latencies. Reads primarily come from the capacity tier, although freshly written “hot” data that has not yet been de-staged may be read from the write-cache tier.
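The all-flash read path described above can be sketched like this. The dictionaries standing in for the write buffer and the capacity tier are illustrative assumptions, not the real on-disk layout.

```python
# Sketch of the all-flash read path: the write buffer is checked first
# for hot, not-yet-destaged data; otherwise the read is served from the
# flash capacity tier.

write_buffer = {}    # block_id -> data, freshly written, not yet destaged
capacity_tier = {}   # block_id -> data, persisted on the capacity SSDs

def write(block_id, data):
    write_buffer[block_id] = data            # buffered in the caching tier

def destage(block_id):
    capacity_tier[block_id] = write_buffer.pop(block_id)

def read(block_id):
    if block_id in write_buffer:             # hot data still in the buffer
        return write_buffer[block_id]
    return capacity_tier[block_id]           # common case: capacity tier

write("a", b"hot")
assert read("a") == b"hot"                   # served from the write buffer
destage("a")
assert read("a") == b"hot"                   # now served from capacity
```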
In the hybrid architecture, server-attached magnetic disks are pooled to create a distributed shared datastore that persists the data. In this type of architecture, you can get up to 40K IOPS per server host.
All Flash Only.
“High level description”
Dedupe and compression happen during destaging from the caching tier to the capacity tier. You enable “space efficiency” at the cluster level, and deduplication happens on a per-disk-group basis; bigger disk groups result in a higher deduplication ratio. After blocks are deduplicated, they are compressed. Combined, deduplication and compression can achieve up to 5x space reduction, fully dependent of course on the workload and the type of VMs.
“Lower level description”
Compression (LZ4) is performed during destaging from the caching tier to the capacity tier. The deduplication block size is 4 KB. For each unique 4 KB block, compression is performed; if the output is 2 KB or smaller, the compressed block is saved in place of the 4 KB block. If the output is larger than 2 KB, the block is written uncompressed and tracked as such. This avoids block-alignment issues and reduces the CPU hit of decompression, which is greater than that of compression for data with low compression ratios. All of this data reduction happens after the write acknowledgement, at a cost of at most a few percent (less than 5%) of additional resource usage.
Deduplication domains are within each disk group. This avoids the need for a global lookup table (significant resource overhead) and lets those resources go toward tracking a smaller, more meaningful block size. By purposefully avoiding deduplication of write-hot data in the cache, and compression of incompressible data, significant CPU and memory resources are saved.
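The destaging path described in the last two paragraphs can be sketched as follows. This is an illustrative model only: zlib stands in for LZ4 so the example is self-contained, and a single disk group's store is reduced to a dictionary keyed by content hash.

```python
import hashlib
import os
import zlib

# Sketch of destaging with dedupe + compression: 4 KB blocks are
# deduplicated by content hash within a disk group, then compressed;
# the compressed form is kept only if it fits in 2 KB or less.
# zlib stands in for LZ4 here to keep the example self-contained.

BLOCK = 4096
stored = {}   # content hash -> (is_compressed, bytes), one disk group

def destage(block):
    assert len(block) == BLOCK
    digest = hashlib.sha256(block).hexdigest()
    if digest in stored:                  # duplicate: just reference it
        return digest
    packed = zlib.compress(block)
    if len(packed) <= 2048:               # compressed form fits in 2 KB
        stored[digest] = (True, packed)
    else:                                 # written uncompressed, tracked as such
        stored[digest] = (False, block)
    return digest

compressible = b"A" * BLOCK               # compresses far below 2 KB
random_block = os.urandom(BLOCK)          # effectively incompressible

d1 = destage(compressible)
d2 = destage(compressible)                # second copy is deduplicated
assert d1 == d2 and len(stored) == 1
assert stored[d1][0] is True              # stored compressed

d3 = destage(random_block)
assert stored[d3][0] is False             # stored uncompressed
```

Keeping the dedupe domain per disk group, as the notes explain, is why a simple local table like `stored` suffices in the sketch: no global lookup structure is needed.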
Note: This feature is supported with stretched clusters and the ROBO edition.
Stretched storage with Virtual SAN will allow you to split the Virtual SAN cluster across 2 sites, so that if a site fails, you can seamlessly fail over to the other site without any loss of data. Virtual SAN in a stretched storage deployment accomplishes this by synchronously mirroring data across the 2 sites. The failover is initiated by a witness VM that resides in a central place, accessible by both sites.
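The stretched-cluster behavior can be sketched as follows. The `Site` class and witness logic are illustrative assumptions for the example, not the Virtual SAN implementation.

```python
# Sketch of a stretched cluster: writes are mirrored synchronously to
# both sites, and a witness at a third location picks the surviving
# site if one fails, so no acknowledged data is lost.

class Site:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.up = True

    def write(self, key, value):
        if self.up:
            self.data[key] = value

def mirrored_write(site_a, site_b, key, value):
    # Synchronous mirroring: acknowledge only once both sites have the data.
    site_a.write(key, value)
    site_b.write(key, value)
    return site_a.up and site_b.up

def witness_failover(site_a, site_b):
    # The witness keeps the surviving site active, avoiding split-brain.
    return site_a if site_a.up else site_b

a, b = Site("site-a"), Site("site-b")
assert mirrored_write(a, b, "vm1", "disk-state")

a.up = False                                   # site A fails
survivor = witness_failover(a, b)
assert survivor is b
assert survivor.data["vm1"] == "disk-state"    # no data loss on failover
```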
We envision a one-cloud architecture that is scalable, reliable, and secure, to support any application. And ultimately, users can access these applications from any device in a very secure way. This architecture, as we say, allows you to “Innovate like a start-up, yet deliver like an enterprise.”
Let’s look at a few examples of how our customers today are working with VMware to leverage this One Cloud, Any Application, Any Device architecture to transform not only their companies, but actually to transform their industries.
Let’s start by talking about an example of how VMware is working with an automotive manufacturer who can now not only monitor, but actually tune cars remotely and do it all in real time.