21. SAN vs iSCSI Stacks copyright I/O Continuity Group, LLC SAN is fast and robust with minimal network overhead. iSCSI is a “bloated” protocol with high network overhead.
27. Traditional Direct-attached Hosts External SCSI Storage Array Parallel SCSI connection LAN Each server is separately attached to a dedicated SCSI storage array, requiring high storage maintenance and making scalability and provisioning difficult. Different vendor platforms cannot share the same external array.
28. FC SAN-attached Hosts FC Storage Array FC Switches 200 to 400 MB/s Tape Library Servers with NICs and FC HBAs LAN FC SANs offer a dedicated block-level infrastructure independent of the LAN. Brocade
31. Physical Servers represent the Before illustration. “Converter” migrates the physical machines over to Virtual Machines running on ESX in the After illustration.
34. SAN + Virtualized Hosts FC Storage Array FC Switches 200 to 400 MB/s Tape Library LAN FC SANs offer a dedicated block-level infrastructure independent of the LAN. Brocade 2 to 20 VMs on each virtualization server
Thank you Doug. We are very happy to have all of you joining us today for this short technology presentation. We are now in a cycle where there is increasing pressure to do more with less, and this cycle is accelerating the adoption of virtualization. Once you have a better understanding of how virtualization is deployed, you should be able to start making more informed decisions about planning for adoption.
With the emergence of virtualization and computing clouds, it is becoming less important where your data actually resides. Virtualization can cut across physical boundaries through centralized and automated management. Currently the Fortune 100 are virtualizing about 10% of all their servers. The prediction is that in the next 8-10 years, 100% of their servers will be virtualized.
We have a very mixed audience with us today, but we hope to focus on the end user’s objectives. I am assuming that most people know little or nothing about this topic and hope it is mildly riveting. For the vendors and technical members of the audience, I invite you to ask your questions during the break or send me an email for more information. You can also sign up to subscribe to my newsletter.
I am going to endeavor to provide some quick snapshots and crisp answers describing the salient features of this emerging technology with a minimum of techno-babble. I am in my 9th year of explaining SAN, mostly to technical audiences, which suggests that Storage Area Networks are here to stay, at least for the foreseeable future. Not only are they the de facto architecture for managing your datacenter, the cost has come down dramatically in those years, since the technology is now reaching critical-mass proportions. The most common customer comment I heard was, “someone came in and set up our SAN two years ago and I haven’t touched it since.” It seems to run by itself. SANs are now 80% cheaper and well within the budget of most SMBs. Since virtualization was introduced 5 years ago, the Fortune 100 have been able to consolidate 1,000 servers into 200 servers, an 80% reduction. That is a lot of avoided hardware investment, translating to significant energy savings. Because this second major consolidation trend, virtualization, is DEPENDENT ON a SAN foundation, I decided to help people put all the pieces of the puzzle together. If you are considering adopting virtualization technology, there are some underlying requirements that should be addressed. There are in fact two phases of consolidation: first storage consolidation, then server consolidation.
Einstein used to say that if you really understand your subject, you should be able to explain it to a three-year-old. I will provide a high-level view of the solutions, given our short time slot. This technology, which has been around for almost a decade, is now reaching a critical-mass adoption rate. Over the last 8 years, the typical customer comment has been, “someone came in and set up our SAN two years ago and I haven’t touched it since.” It seems to run by itself. SANs are now 80% cheaper and well within the budget of most SMBs. Please see me after the presentation for more technical information, or visit my website and enroll in my newsletter.
What triggered the SAN movement 10 years ago was relentless data growth, which showed no signs of slowing down. Before virtualization came around, the problem was managing the underlying data. Just like your closets get full, so do your hard drives. So how can we solve the data proliferation problem, with files and total capacity doubling every 12-18 months? Managing your stuff can consume significant IT resources. The bottom line is that most organizations cannot ignore this situation.
Over the years the choices and competition in the SAN market have grown, causing more complexity and less transparency in making appropriate decisions. First let’s consider the challenges. Top section: Every organization has its own ways of handling new IT. Bottom section: Depending on your current rate of growth, you may or may not have a data management problem, but the majority of organizations do. Replacing existing servers with new ones to keep up with technology becomes prohibitively expensive. Besides the cost of new hardware, there’s energy consumption, sometimes referred to in today’s ecological terms as the carbon footprint. When you connect to the internet, you are attached to “computing clouds,” as they are now called, made up of thousands of physical and virtual servers.
It is not readily apparent that there are underlying infrastructure requirements that must be in place for virtualization solutions to work effectively. The common denominator is consolidation. Since virtualization was introduced 5 years ago, the Fortune 100 have been able to consolidate 1,000 servers into 200 servers, an 80% reduction. That is a lot of avoided hardware investment and significant energy savings.
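The consolidation arithmetic above is easy to sanity-check; a minimal sketch using the figures quoted in the talk (1,000 physical servers onto 200 virtualization hosts):

```python
# Back-of-the-envelope consolidation math using the talk's figures.
physical_servers = 1000
virtualization_hosts = 200

reduction = 1 - virtualization_hosts / physical_servers  # fraction of hardware avoided
vms_per_host = physical_servers / virtualization_hosts   # average consolidation ratio

print(f"Hardware reduction: {reduction:.0%}")       # 80%
print(f"Average VMs per host: {vms_per_host:.0f}")  # 5
```

An average of 5 VMs per host sits comfortably inside the "2 to 20 VMs on each server" range shown on the slide.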
So far data cannot be stored in thin air. Maybe in 2012, but not today. Let’s get down to the bottom line. Your datacenter IS your business. If you lose it, you’ll likely go out of business. That is why publicly held companies have a fiduciary responsibility to ensure it is always up and backed up in case of a disaster.
Technology for storing data is still dependent on a hard disk, so we need ways of adding more disk capacity seamlessly. We will examine how companies are moving away from the DAS model. Can you see the challenges with dedicated storage that is not consolidated? It is an all-or-nothing proposition: there is no partitioning a shelf of disks across multiple servers. Whereas when all the servers share a single array, all the capacity can be consolidated and centralized in one storage array. Consolidation means stacking those standalone disk shelves together into one big pool of disk capacity. When applications are busy, the hard drives get full, and servers often require more disk space. If a server is directly attached to a new shelf of disks (like the servers above), there is time needed to install hardware and drivers, plus at least one reboot of the system. If instead they were all sharing one array, like the bottom system, we could connect all the hardware to a single dedicated network. If the bottom storage array were like a helium balloon machine for dispensing new disk capacity, each server would be able to have the new capacity it requires automatically, at any time and without disruption. The server could just inhale that helium and keep its applications running non-stop.
In the traditional storage model, every server has captive storage shelves that it harbors for itself. Virtualization is also a remedy for this. When new applications are installed on their own server, you are dedicating not only disks but also processors and memory to one application on a physical machine. This 1-to-1 model has led to server sprawl, keeping your IT staff even busier. Notice the external arrays are an all-or-nothing proposition for each server, meaning you cannot share disks between servers (as you can see, there is no connection between them). Each server has its own shelf of disks whether or not it needs all the capacity. What remains unused or idle is a waste of resources. The servers attach to the entire shelf even if they only require half the space, and it’s usually difficult to predict when any of your servers might need more disk space. So utilization levels cannot be optimized using this model.
As the name implies, this is a network of storage – servers using a network to gain access to a set of disks. Servers require an HBA to convert the signal to optical and send the request to the switch, which forwards it to the storage controllers. It is very fast. The first phase of consolidation revolves around the disks, moving them to one central repository called the storage array. This is the capacity building block which houses ALL the disks. Add 15-disk shelves on-the-fly without disrupting anything on the SAN. Attached to a set of switches, known as the fabric, with seamless redundancy. Distinguish fabric from SAN. It’s scalable, with devices added dynamically without interruption of service. Separation of storage management from application management – app specialists contact a storage specialist to provision more capacity. All right-click operations. The switches vary in number of ports, network speed and basic networking technology. Not all servers go on the SAN. A heterogeneous SAN means most platforms and applications can attach (depending on the compatibility matrix). You only buy as much as you need now, and add on later. How do the servers acquire the storage from the array? The helium balloon machine is able to provide any size or color balloon that any server might want, and the servers can inhale the helium (i.e. disk space) without taking a breath (i.e. seamlessly, without rebooting). That level of flexibility enhances the utilization levels of the array. Are all your servers SAN-worthy – do they require the fast connection speed and no downtime? Probably not.
Perhaps you do not require a full-time SAN administrator in-house. Traditional RAID technology with automatic hot spares taking the place of failed drives. (Hot spares run by themselves.) Further, the storage vendor immediately receives a notification to ship a replacement drive. (Vendors keep the array running without you necessarily knowing a disk or other component failed.)
To review and compare our two main storage options again, notice the benefits of the SAN over the DAS. DAS is sort of like musical chairs: only one server can attach at a time, and adding disk space to a server is a manual process, unlike the SAN that runs by itself. Of the two models, SANs are much faster and more reliable, with less management overhead. What does “runs by itself” mean? NSPOF = No Single Point of Failure: hardware is duplicated or redundant, so if one part fails the other takes over. It also runs by itself through the user-friendly storage management interface, which provides rapid disk volume provisioning.
This is a simple two-server diagram, but conceivably you could have up to around 200 servers on one SAN, depending on the design. $20,000 is a starting point for an entry-level iSCSI SAN. There is a compatibility matrix explaining exactly what hardware and software are supported. Right-size capacity limits and the different disk types supported. My customers value my approach to simplifying their networks.
Two classes of switches – pizza-box switches shown here, and Enterprise Director switches with hundreds of ports. There are differences in the switch speeds and port counts. You always want the fastest with as many ports as you can afford. This is the most important planning phase to make the best choices. Traditional FC protocol or the emerging iSCSI block protocol running on a GigE network. Type of switch, port density and speed. Not necessarily an either/or decision – you could incorporate two separate SANs into your datacenter. We are certified on Brocade and Cisco FC switches. An iSCSI environment suits test/dev; after the proof of concept is completed, we can help you determine if it would be wise to go with FC. Fibre Channel switches send light or optical signals across the network, which makes SANs very fast. Performance vs. cost tradeoff. Buy only what you need and grow organically. Per-port cost of FC is $500 vs iSCSI < $50.
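The per-port figures above make the cost side of the tradeoff easy to quantify; a quick sketch, where the 24-port fabric size is a hypothetical example rather than a figure from the talk:

```python
# Per-port costs quoted in the talk (approximate, at time of presentation).
FC_PORT_COST = 500    # Fibre Channel, dollars per switch port
ISCSI_PORT_COST = 50  # iSCSI on GigE, dollars per switch port

def fabric_port_cost(ports: int, cost_per_port: int) -> int:
    """Total switch-port cost for a fabric of the given size."""
    return ports * cost_per_port

ports_needed = 24  # hypothetical small dual-switch SAN
print(fabric_port_cost(ports_needed, FC_PORT_COST))     # 12000
print(fabric_port_cost(ports_needed, ISCSI_PORT_COST))  # 1200
```

The 10x gap in port cost is why an iSCSI proof of concept for test/dev, followed by FC where performance justifies it, is the path suggested above.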
Each option has price/performance differences. The servers connect to the SAN through a software initiator or hardware HBA. HBAs translate the server protocol to a block protocol. Again, we can help you select the best option for your datacenter.
During the engagement we will consider the best disk type based on application workloads. This could be a seminar in itself. http://wiki.emdstorage.com/Hardware/DriveInterfaceComparison http://www.webopedia.com/DidYouKnow/Computer_Science/2007/sas_sata.asp I’m not going to delve into the background of all these disk types, which look like Greek letters to some of you. But the main takeaway is that there are decisions about choosing the right disk that we can match to your requirements. Besides choosing the right disks, decisions are also required about the RAID protection level and application workload requirements. ATA = Advanced Technology Attachment. SATA uses native command queuing; SAS uses tagged command queuing.
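The RAID-level decision directly affects how much of a shelf's raw capacity you actually get to use. A minimal sketch of the standard usable-capacity formulas (the 15-disk shelf and 1 TB drive size echo figures mentioned earlier in the talk; hot spares and filesystem overhead are ignored):

```python
def usable_capacity_tb(disks: int, disk_tb: float, raid: str) -> float:
    """Usable capacity for common RAID levels, ignoring hot spares
    and filesystem overhead."""
    if raid == "RAID0":   # striping, no protection
        return disks * disk_tb
    if raid == "RAID1":   # mirrored pairs: half the raw capacity
        return disks * disk_tb / 2
    if raid == "RAID5":   # one disk's worth of capacity goes to parity
        return (disks - 1) * disk_tb
    if raid == "RAID6":   # two disks' worth of capacity goes to parity
        return (disks - 2) * disk_tb
    raise ValueError(f"unknown RAID level: {raid}")

# A 15-disk shelf of 1 TB drives:
print(usable_capacity_tb(15, 1.0, "RAID5"))  # 14.0
print(usable_capacity_tb(15, 1.0, "RAID6"))  # 13.0
```

Matching the protection level to the workload is part of the same assessment as choosing the disk interface.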
The key is understanding your server’s application workloads. When IOCG comes in to perform an assessment, we initially consider the workloads running on the servers. IOCG has tools that will analyze your bandwidth and IOPS. IOCG can help you understand how to work more effectively with the vendors.
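The IOPS side of a workload analysis can be approximated from a drive's mechanical characteristics; a minimal sketch, where the seek times and RPM figures are typical published values for the drive classes, not measurements from any customer assessment:

```python
def max_iops(avg_seek_ms: float, rpm: int) -> float:
    """Theoretical per-disk random IOPS: one I/O takes roughly the
    average seek time plus average rotational latency (half a
    revolution)."""
    rotational_latency_ms = (60_000 / rpm) / 2  # ms per half revolution
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# Typical published drive characteristics (illustrative assumptions):
print(round(max_iops(3.5, 15_000)))  # 15k RPM FC/SAS disk, roughly 182 IOPS
print(round(max_iops(9.0, 7_200)))   # 7.2k RPM SATA disk, roughly 76 IOPS
```

Numbers like these are why fast FC/SAS disks land in the upper storage tiers and SATA in the lower ones, as the tiering discussion below describes.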
Some of your data is daily mission-critical applications which cannot go down – Tier 1, which is your FC SAN option. Other data is old and rarely used except for historical purposes – Tier 5, which is your backup tape. Then there are three intermediate tiers which depend on server workload characteristics. We can analyze your servers’ workloads and apply the right type of storage. Not every server in your environment is mission-critical, so IOCG will assess the types of servers and applications in your datacenter and right-size the disk type for you. This is part of the design process and feeds into the Double-Take message coming up.
The questions you should be asking yourself are shown here. Right size the storage array to meet future needs. Will the solution handle all my business needs? When will my backups take longer than overnight to complete? There are vendor roadmaps and compatibility matrices which one must consider before purchasing.
Just to quickly review – this is our Before picture. The musical chairs model where only one server can attach to an external disk shelf at a time.
To review, by having all your servers sharing one storage array on a high-speed dedicated SAN, you will not only have a system that runs by itself, you will also have the necessary foundation to install a virtualization solution like VMware. Notice how the VMware server can share the same storage as Linux or Windows – again, SANs are flexible.
Hyper-V RTM’d in July. The three virtualization software solutions are very different in their maturity and feature set. VMware has captured the Fortune 100 market, where robust performance and availability of mission-critical applications justifies the expense. Virtual Iron has captured the SMB market, where cost savings matter more than raw performance, while maintaining many load balancing and high availability features. They also do not require as much Linux experience as VMware, and offer easy wizard-driven configuration. Hyper-V has just been released to market after the beta version came out earlier in the year, and is a free feature of Windows Server 2008. So it’s more of a comparison of apples and oranges at this stage, until the newer entrants expand their products.
Here is a good example of a before and after virtualization illustration. You should be able to see the value proposition. There are tools that will migrate any of your physical servers to virtual servers. So instead of having one application running on each physical server, you have multiple “Virtual Servers” running on two or more high availability servers. Thereafter, if you ever need another server, you simply right click and clone a virtual server. This is on-the-fly server provisioning.
Phases of deploying virtualization: 1. Separating software from hardware with the ability to have multiple copies
VMs replacing hardware – mobile between physical servers. Higher server utilization with efficient use of disks. Now let’s see what it looks like when it’s all put together. Again, by having all your servers sharing one storage array on a high-speed dedicated SAN, you will not only have a system that runs by itself, you will also have the necessary foundation to install a virtualization solution like VMware. These servers can be added on the fly, without disrupting other servers currently attached to the SAN. Notice how the VMware server can share the same storage as Linux or Windows – again, SANs are flexible.
These are some of our vendor certifications.
Today I/O Continuity Group supports vendor-neutral solutions, meaning we focus on methodology and technology, supporting best-of-breed products and fitting all pieces of the puzzle into a well-rounded solution. This is technology as it is, flaws and all, without any particular brand loyalty. If the network design employs a fundamental philosophy of using the best equipment for a particular function regardless of vendor, then change is greatly simplified. But it does mean that you should have a migration plan. A vendor-neutral design philosophy forces you to use open, non-proprietary protocols. Without such a design philosophy, it is often impossible to introduce equipment from a new vendor.
We can do it all, from assessment to design to deployment. Trusted advisor for thousands of satisfied customers meeting their business objectives. Experience in every industry and all sizes of organizations. We demystify technology, delivering highly-tailored, custom-designed solutions. We integrate different vendor products into a logical, complete solution. We are certified on everything we deliver and know how to work with large vendors. Lower overhead costs to pass on to the customer.
Our business model is flexible. Irrespective of where your company resides on the process cycle, we can offer our end-to-end services. We will design the best solution working with your vendor, or one that meets your needs. We can help you with the implementation, training and management, or we can refer you to hosting companies who can manage the entire solution for you.
We can help with new or existing SANs and/or new or existing virtualization solutions. Please contact us for more information.
We would be happy to answer any company-specific questions after the Q&A session or by appointment.