You have an application, you deploy servers to run it and you deploy storage to store the data. You have another application, another set of servers, another set of storage, etc.
Typically these silos are independent of each other, and many times separate choices are made about servers and storage between the different silos. All sorts of defense mechanisms come up, such as approved vendor lists (SAN vendors, high-end tier-two vendors, etc.).
If you wanted to roll out a new application, step number one was to roll out new hardware, and it would take months to put anything new into production.
It was hard to share resources. Resources sat stranded and idle in one silo while you were running out of resources on another application.
Zones of Virtualization
Next came server virtualization. Server virtualization had a compelling value proposition and a profound implication. The value proposition was simple: most of my servers are underutilized; if I could run multiple apps on the same server, I could reduce my server footprint, drive up my utilization, and save a lot of money. Some examples: British Telecom went from 3,000 servers down to 100-and-something blades. Citibank, even when they had no money to spend, still rolled out VMware worldwide and put a petabyte of data storage behind it, because the savings to the company were compelling.
Virtualization allowed a decoupling of the application from the hardware. Now applications are mobile. Applications can move from server to server for load balancing, they can move from data center to data center for disaster recovery, they can move into the cloud and back out of the cloud for capacity planning, flexibility, and cost.
Virtualization made it possible to decouple applications from servers and build a broad, homogeneous, horizontal server infrastructure that's capable of running multiple applications simultaneously.
You no longer have to deploy hardware in order to deploy a new application. You can flow the resources to the demand. You have tremendous flexibility to move your applications around, and you can drive a degree of standardization.
Shared Storage Infrastructure
The same is true of storage. NetApp was early to recognize that this has tremendous implications for storage as well. Just like customers who want to build a broad horizontal infrastructure running multiple applications for servers, they want to do the exact same thing for storage as well.
If you look at the evolution of this chart, we saw the silo model giving way to the virtualization model. This broader model of having many applications running on the same infrastructure, optimized for flexibility, speed, and scale, is what we'll call shared infrastructure. There are other names: virtual data center, dynamic data center, virtual dynamic data center, internal cloud, whatever you want to call it. We'll use shared infrastructure.
Now, I think all of these models will continue to exist, but the bottom line is that the application-based silos are ultimately going to get relegated to legacy applications that nobody wants to migrate, or to the small set of key applications in the data center that people believe still warrant their own dedicated infrastructure. The vast majority of the storage and the vast majority of the applications are going to move to the shared infrastructure over time.
Our goal, and where we’re trying to go in the market, is to be the platform choice for the shared infrastructure. That is the design point on Data ONTAP 8.
This has implications. Just as virtualization had implications for the market when it came along, it drew a whole different set of purchasing criteria.
NetApp FAS6200 FAS3200 Customer Presentation
1. FAS6200, V6200, FAS3200, and V3200 Customer Presentation. Michael Hudak, NetApp Sales Specialist, Champion Solutions Group, mhudak@championsg.com, 800-771-7000 x344
2. Moving Toward a Shared IT Infrastructure. Traditional Approach: Application-Based Silos, then Zones of Virtualization, then Internal Cloud, then External Cloud Services. Layers: Management, Apps, Servers, Network, Storage. A flexible and efficient shared IT infrastructure supports multiple workloads and customer groups from a single IT infrastructure.
9. V-Series Open Storage Controllers: New V6200 and V3200 Models. V-Series builds on your current storage investment to satisfy unmet needs. V6280: 2,880TB; V6240: 2,880TB; V6210: 2,400TB; V3270: 1,920TB; V3240: 1,200TB; V3210: 480TB. Supporting disk arrays from major storage vendors.
16. Storage Efficiency Enables You to Do More with Less: SATA/Flash Cache, RAID-DP®, thin provisioning, Snapshot™ technology, deduplication, thin replication, virtual copies, service automation.
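Thin provisioning, one of the efficiency technologies listed above, presents a large logical size while consuming physical space only for blocks actually written. A minimal sketch of the idea, using an ordinary sparse file on a POSIX filesystem (a generic illustration, not NetApp's implementation):

```python
import os
import tempfile

def thin_provision(path, logical_size):
    """Create a file with a large logical size but near-zero physical footprint.
    truncate() sets the size without allocating data blocks (a sparse file)."""
    with open(path, "wb") as f:
        f.truncate(logical_size)

with tempfile.TemporaryDirectory() as d:
    vol = os.path.join(d, "volume.img")
    thin_provision(vol, 10 * 1024**3)            # advertise 10 GiB
    logical = os.path.getsize(vol)               # what the consumer sees
    physical = os.stat(vol).st_blocks * 512      # blocks actually allocated
    print(logical, physical)                     # physical is far below logical
```

The same over-subscription principle lets a storage array advertise more capacity than it physically holds, allocating real blocks only as applications write data.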
21. New FAS6200 and V6200 Family: Designed for Large-Scale Shared IT Infrastructure. Models: FAS6280, FAS6240, FAS6210. High performance for demanding workloads: double the performance of current FAS systems, with ongoing performance gains from Data ONTAP® 8. Future-ready scalability and flexibility: up to 3PB capacity, plus 2x more PCIe connectivity; built-in 10Gb Ethernet, 8Gb FC, and 6Gb SAS. Enhanced enterprise-class availability: service processor and alternate control path (ACP).
27. Designed-In Enterprise-Class HA. Proven data availability of NetApp® storage infrastructure: better than five-nines availability, demonstrated broadly in real customer environments and validated by an industry analyst firm. New enterprise-class availability features: lights-out management with the new service processor; nondisruptive recovery through the storage alternate control path (ACP). Continuous data availability for mission-critical applications: MetroCluster™ designed for zero planned and unplanned downtime.
28. New FAS3200 and V3200 Family: Perfect Building Block for Shared IT Infrastructure. The best value for mixed workloads. Future-ready flexibility and scalability: 50% more PCIe connectivity and up to 2PB of storage capacity. Unified architecture + Data ONTAP® 8 = leading storage efficiency. Models: FAS3270, FAS3240, FAS3210.
32. Suitable for both enterprise and MSE customers. FAS3240: mixed workloads with the best price/performance. FAS3210: targets MSE and Windows storage consolidation.
59. Key Takeaway. New systems offer greater performance, capacity, and expandability. Continuing the unified storage architecture, the new systems leverage Data ONTAP® 8. The simplified software structure delivers even more value.
Goal of Slide: Talk through the different environments that leverage a shared storage infrastructure. What's dominant in our discussions with customers today is the application silo model on the left-hand side. This has been the primary provisioning model for servers and storage up until the last few years.
Goal of Slide: To introduce the eight criteria for shared IT infrastructure.
Key Points: Introduce the criteria in three broad buckets. First, criteria like scalability (both scale-up, with bigger boxes, and scale-out) and storage efficiency are standard criteria. They're table stakes as customers move to shared infrastructure. Then there are criteria that were nice-to-haves before but have become must-haves in shared infrastructure: intelligent caching (the ability to do dynamic performance optimization), unified architecture (you can't build a shared infrastructure without a unified architecture), and integrated data protection (where much of the data protection is built into the storage layer and offloaded from the servers). Finally, there is a set of new criteria unique to shared infrastructure. By definition, shared infrastructures are multi-tenant and multi-workload, so security and secure multi-tenancy become criteria that apply to shared infrastructure. Similarly, service automation, so that you can do one-click provisioning or non-human (automated) provisioning, becomes a requirement of shared infrastructure. Shared infrastructures also need non-stop operations, so that physical disruptions, capacity changes, load balancing, performance balancing, and so on don't require an application-level disruption. In a shared infrastructure you cannot predict which application would be disrupted by what, so non-stop operation becomes a requirement.
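To make the service-automation criterion concrete, here is a minimal sketch of policy-driven, "one-click" provisioning: a single call that allocates capacity and attaches a data-protection policy. All names here (`StoragePool`, `provision`, the policy keys) are hypothetical illustrations for this sketch, not a NetApp API.

```python
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    """A shared pool that many tenants provision from (hypothetical model)."""
    capacity_gb: int
    volumes: dict = field(default_factory=dict)

    def provision(self, name, size_gb, policy):
        """One call allocates the volume and applies its protection policy,
        with no per-request human involvement."""
        used = sum(v["size_gb"] for v in self.volumes.values())
        if used + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted")
        self.volumes[name] = {
            "size_gb": size_gb,
            "snapshots_per_day": policy.get("snapshots_per_day", 0),
            "replicate_to": policy.get("replicate_to"),
        }
        return self.volumes[name]

pool = StoragePool(capacity_gb=1000)
vol = pool.provision("erp-data", 200,
                     {"snapshots_per_day": 24, "replicate_to": "dr-site"})
print(vol)
```

The design point is that the policy, not an administrator, carries the operational knowledge: protection and placement decisions are encoded once and applied automatically on every provisioning request.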
Goal of Slide: The NetApp systems portfolio is truly unified.
Key Points: Unified storage architecture is much more than support for multiple protocols on a single storage array. In most environments of scale, it is uncommon to run multiple protocols on the same box. The real benefits of unified storage are at the architecture level, not at the box level. The big question is how to achieve the lowest cost profile while meeting the SLAs for a particular workload or mix of workloads. For example:
Why buy more than you need? The ability to grow and scale from low-end to high-end systems on the same architecture means you don't have to take a rip-and-replace approach to one of the most costly parts of your IT operations: the processes and skill sets required to deliver IT services to your users.
How can NetApp help you benefit from our IT efficiencies if you already have an investment in a different storage infrastructure? Our ability to virtualize existing SAN systems with V-Series lets you achieve the benefits of standardization, data protection, and storage efficiency even if you are currently running EMC, HDS, or HP storage systems.
How can you achieve different cost-performance profiles in the same architecture? NetApp enables what some people refer to as "tierless storage" through flash-assist and caching techniques that deliver high performance with low-cost drives. A unified architecture means you don't need to rip and replace when you need more I/O or, more likely, a mix of I/O and cost profiles for different applications and storage needs.
Standardization helps drive cost reduction: with fewer architectures, you can be more efficient and more flexible. You can increase storage utilization by using a single architecture rather than a multi-array approach that forces you to break storage up into smaller pieces.
The ability to handle multiple workloads and deploy multiple technology options across a single architecture gives you the flexibility to deal with change; whatever storage requirements you have today will likely change again in the next 12 to 18 months. These are all varying aspects of unified architecture. If you can deliver a unified set of tools, a unified set of processes, and the same way of doing disaster recovery, backup, provisioning, management, and maintenance, you begin to see massive benefits in complexity reduction. Complexity reduction quickly translates into cost reduction.
Goal of Slide: Share NetApp storage efficiency advantages.
IT efficiency is not simply technology and is not based on a particular feature; it is a way of thinking. For example: how to get more out of low-cost components like SATA drives by using Flash Cache to drive acceleration on top of them; how to pack more and more data into the same physical containers through techniques like deduplication and compression; how to reduce the number of copies by creating snapshots and as many virtual copy instances as we can. Storage efficiency is an ongoing agenda for NetApp. We'll continue to provide enhancements in this area, increasing both the scale at which storage efficiency applies and the number of use cases.
Operational Efficiency: Probably an even more important element of optimization is reducing the number of people required to manage a given size of infrastructure: reducing the number of full-time administrators it takes to manage more and more terabytes. This is where service automation becomes really critical. How do we make sure we can do end-to-end, one-button provisioning in a completely lights-out data center? Increasingly, more of our customers are beginning to ask for that. When you are building data centers that require 10, 20, 30, or 50 petabytes of storage in a single facility, you cannot succeed by simply adding more manpower to handle provisioning and management of these environments. Service automation becomes a requirement. Our goal is to provide application integration that reduces the separation between an application administrator's job and a storage administrator's job, and that automates data protection as an integrated element of data management. Very rapidly we're going to reach a point where many tasks no longer need to rely on an administrator.
These all drive toward increasing the number of terabytes that can be managed by a single FTE. Some of this can be done with technology changes, but much of it requires process changes as well. How can you change test and development processes to take advantage of virtual cloning capabilities to accelerate QA? That's a process change, and IT organizations have to be committed to making those types of process changes in order to continue driving efficiency in the data center. If you don't make those changes, there is going to be a cost gap between what a service provider can offer and what internal IT organizations can offer. Increasingly, benchmarks will be developed that drive economic decisions about the best way to deploy and manage applications.
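The deduplication technique mentioned above can be sketched in a few lines: fingerprint each fixed-size block, store each unique block once, and keep the logical layout as a list of references. This is a generic illustration of block-level deduplication, not Data ONTAP's algorithm.

```python
import hashlib

BLOCK = 4096  # fixed 4 KiB block size for this sketch

def dedup_store(data):
    """Split data into blocks; store each unique block once, keyed by its
    SHA-256 fingerprint. Returns (physical store, logical reference list)."""
    store = {}   # fingerprint -> block contents (physical storage)
    refs = []    # logical layout: one fingerprint per block position
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # duplicate blocks are stored only once
        refs.append(fp)
    return store, refs

data = b"A" * BLOCK * 3 + b"B" * BLOCK   # three identical blocks, one unique
store, refs = dedup_store(data)
print(len(refs), len(store))              # 4 logical blocks, 2 physical blocks
```

Reconstructing the original data is just a lookup per reference, which is why deduplicated storage stays transparent to the application reading it.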
Goal of Slide: Highlight the differences between the software structure on current systems and the new software structure on the new FAS3200 and FAS6200 systems.
Key Points: With the new FAS3200 and FAS6200 systems, we're rolling out a new software structure that delivers more value and simplifies system configurations. Currently, midrange and high-end systems include some software in the base and then have an a-la-carte menu of 30+ add-on software products. In addition, the iSCSI protocol is included with the system, while other protocols, if needed, must be purchased separately.
The new systems have a simplified software structure with three key features: more value is now standard with each system; customers have enhanced flexibility to decide which protocol to include, for free, with their system purchase; and add-on software has been simplified down to six key products, with the option to buy them all in the Complete bundle and to add any additional protocols that are needed.
Goal of Slide: Highlight the value of Data ONTAP Essentials.
Key Points: Data ONTAP Essentials is included with each new FAS6200 and FAS3200 system. It includes more value-added and highly differentiated NetApp software at no charge with every FAS3200 and FAS6200 storage system shipped. Data ONTAP now includes:
MultiStore: our secure multi-tenancy software for shared storage infrastructures, ideal for cloud environments.
MetroCluster: our automated, continuous data availability software for mission-critical applications.
Core storage management software: centrally monitor, manage, provision, and protect all your data with policies.
One protocol of choice: offering our customers a true unified storage architecture for less.
Protocols: One protocol of choice is included in Data ONTAP Essentials at no charge. Additional protocols are now all the same price. This is good news for NAS customers, as NetApp used to charge for both CIFS and NFS; now one protocol is included and the customer is charged for only one protocol, not two. NetApp remains highly competitive in SAN and unified environments too.
Add-on Software:
SnapVault: includes both SnapVault Primary and SnapVault Secondary licenses. Customers can easily perform disk-to-disk backups of data within the same system using tiered storage, or perform remote backups, eliminating the need for tape.
SnapManager Suite: includes all SnapManagers and SnapDrives in one convenient software product. Customers virtualizing their server infrastructure and running multiple applications on shared storage now need to purchase only one software product for integration and enhanced data protection for their applications.
SnapRestore: restore LUNs and file systems from previously backed-up Snapshot copies in seconds, irrespective of size or number of files.
SnapMirror: synchronous, semi-synchronous, and asynchronous data replication software that delivers simple, efficient, and flexible disaster recovery for business-critical applications.
FlexClone: instantly create transparent virtual copies of production databases or virtual machines without needing additional storage capacity or compromising performance.
Complete Bundle: includes all protocols and all extended-value software in one all-inclusive, convenient software bundle at a highly competitive price.
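The "virtual copy without additional storage capacity" idea behind products like FlexClone is commonly built on copy-on-write: a clone shares the parent's blocks and consumes space only for blocks it overwrites. A toy sketch of that general technique (not NetApp's implementation; all class names are hypothetical):

```python
class Volume:
    """Parent volume: a flat map of block number to block contents."""
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})

class Clone:
    """A writable clone that shares the parent's blocks until written."""
    def __init__(self, parent):
        self.parent = parent
        self.overrides = {}   # only blocks changed in the clone consume space

    def read(self, n):
        # Prefer the clone's own copy; fall back to the shared parent block.
        return self.overrides.get(n, self.parent.blocks.get(n))

    def write(self, n, data):
        # Copy-on-write: record the new block locally, parent stays untouched.
        self.overrides[n] = data

parent = Volume({0: b"base0", 1: b"base1"})
clone = Clone(parent)          # creating the clone allocates no block storage
clone.write(1, b"new1")        # only now does the clone consume one block
print(clone.read(0), clone.read(1), parent.blocks[1])
```

Because creation touches no data blocks, a clone is near-instant and initially free, which is why virtual copies are attractive for test/dev and QA workflows.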
Goal of Slide: Highlight the key takeaways for the audience.
Key Points/Messages: New systems offer greater performance, capacity, and expandability. NetApp offers the only truly unified portfolio, and we're continuing our unified storage architecture. The new systems deliver industry-leading storage efficiency and leverage Data ONTAP® 8 and all its advantages. The simplified software structure now delivers even more value.