To be considered a cloud, a solution must have the following key characteristics:
- Dynamic scale up and down
- Self-service
- Pay-per-use

To help facilitate cloud deployments, the following are key cloud enablers:
- Virtualization
- Data mobility
- Multi-tenancy
- Dynamic provisioning
- Data protection
- Data discoverability
- Location independence

The cloud architecture can be deployed in any of the following ways:
- Private: Hosted within an organization's firewall and managed internally or externally
- Public: Hosted on the internet and managed by a provider
- Hybrid: A combination of private and public
Historically, IT has focused on delivering infrastructure for each application. Our infrastructure cloud approach unifies your server, storage and network silos to improve utilization, simplify management and lower costs. Separating applications from the underlying storage allows data to be moved freely according to usage, cost and application requirements, with minimal impact to applications.

As unstructured data overtakes structured data, our content cloud approach creates a warehouse to store billions of data objects. Intelligence makes it all indexable, searchable and discoverable across applications and devices, anytime and anywhere. This allows you to cut the costs associated with managing, storing and accessing data and to automate the information lifecycle.

Infrastructure and content form the foundation for the information cloud, which will help you repurpose and extract more value from your data and content. It integrates data across application silos and serves it up to analytics applications that connect data sets, reveal patterns across them, and surface actionable insights to business users. Underneath it all, our single virtualization platform ensures your organization gets seamless access to all resources, data, content and information.
Cloud solution package
- Can be purchased without the portal or service, for customers and partners
- Can leverage existing technology for investment protection

Management portal (optional)
- Self-service
- Billing/metering
- Reporting

HDS Managed Service (optional)
- Fully managed private cloud
- Pay-per-use model
You know what to expect: all components work together and are performance tested
- Pre-validated reference architectures, guides and services
- Hitachi Data Systems-tested, end-to-end interoperability of compute, storage and network, validated with Microsoft
- Typical and maximum VM counts estimated

Infrastructure assures a consistent experience for the hosted workloads
- Standardization of underlying physical servers, network devices and storage systems; solutions based on a common cloud architecture that has already been tested and validated
- Transformation workshop and planning services to help develop a roadmap for the private cloud

Repeatable processes and outcomes built in
- Reference architectures, guides and services
- Template-based VM provisioning

Reliable Availability
Architecture / Other: Separated host and management clusters
- Virtual Machine High Availability: CB 2000 running Hyper-V and Failover Clustering
- Virtual Machine Live Migration: Admins can live migrate a virtual machine from one blade in the cluster to another to balance workloads or prior to performing server maintenance

Integrated management, so you can better monitor and manage
- Adapt to failures by utilizing management automation that detects and reacts to events
- Hitachi provides monitoring packs for the Hitachi Compute Blade 2000 and the Hitachi Adaptable Modular Storage 2500, allowing admins to be notified of any alerts that require attention

CB 2000: Sophisticated blade failover
- N+1 cold standby enables multiple servers to share a standby server, increasing system availability while decreasing the need for multiple standby servers
- N+M standby enables failover cascading, which reduces total downtime by allowing application workloads to be shared among working servers
- All modules, including fans and power supplies, can be configured redundantly and hot swapped, maximizing system uptime
- Accommodates up to four power supplies in the chassis and can be configured with mirrored power supplies, providing backup on each side of the chassis and higher reliability
- Each fan module includes three fans to tolerate fan failures within a module

AMS 2500:
- Reliability of the AMS 2500 and its active-active controllers: the point-to-point back-end design isolates any component failures that might occur on back-end I/O paths
- 96TB backup pool available for use with Microsoft Data Protection Manager or an existing backup solution
- HDP pools used to host CSVs to isolate the VM OS I/O from the application data I/O

Network:
- SAN architecture that provides multiple paths from the hosts within the CB 2000 to multiple ports on the AMS 2500
- Host Storage Domains on the AMS 2500 were used to ensure that each blade's WWNs were configured to provide dedicated Fibre Channel ports to each Hyper-V failover cluster
- Dedicated management network to provide a degree of separation for security and ease of management
- Dedicated Cluster Shared Volumes and cluster communication network to ensure that when storage connectivity to CSVs is lost due to a failure in the Fibre Channel network, the I/O can be redirected over the cluster network
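The N+1/N+M cold-standby failover behavior described above can be sketched in a few lines. This is a minimal illustration of the logic, not a CB 2000 interface; the blade and workload names are assumptions.

```python
# Minimal sketch of N+1/N+M cold-standby failover: N active blades share M
# cold standbys; when standbys run out, workloads cascade onto survivors.
# Blade names and workload strings are illustrative, not CB 2000 APIs.

class StandbyPool:
    def __init__(self, active, standby):
        self.workloads = {blade: f"workload-{blade}" for blade in active}
        self.standby = list(standby)  # M cold standbys shared by N blades

    def fail(self, blade):
        """Fail over a blade: prefer a cold standby; otherwise cascade the
        workload onto a surviving active blade (N+M cascading)."""
        workload = self.workloads.pop(blade)
        if self.standby:
            target = self.standby.pop(0)         # bring up a shared standby
            self.workloads[target] = workload
        else:
            target = next(iter(self.workloads))  # cascade onto a working server
            self.workloads[target] += "+" + workload
        return target

pool = StandbyPool(active=["blade1", "blade2", "blade3"], standby=["spare1"])
print(pool.fail("blade2"))  # prints "spare1": the cold standby takes over
print(pool.fail("blade1"))  # prints "blade3": cascades onto a survivor
```

The key point the sketch shows is why N+M lowers cost: one pool of standbys covers many active servers, and cascading keeps workloads running even after the standbys are exhausted.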
Architecture / Other: Scalable architecture
- Increase virtual machine density while maintaining performance through monitoring of systems to ensure bottlenecks are detected and corrected, using System Center and Hitachi management tools
- Cluster Shared Volumes (CSV) used for Hyper-V failover clustering eliminate the limitation of one VM per LUN/drive letter, allowing multiple VMs to be placed on a single CSV

CB 2000: High-density computing and throughput by removing performance and I/O bottlenecks
- Maximum number of CPUs and native memory slots
- Largest supported blade I/O bandwidth in its class
- Virtualization to consolidate application and database servers for backbone systems, which has been difficult in the past
- High number of VMs per blade (32 typical, 96 maximum)

Adaptable Modular Storage 2500:
- Symmetric active-active controllers provide integrated, automated, hardware-based front-to-back-end I/O load balancing. Controllers can dynamically and automatically assign the access paths from the controller to the LU, and utilization rates of each controller are monitored so that workload is distributed more evenly
- The point-to-point back-end design virtually eliminates I/O transfer delays and contention associated with Fibre Channel arbitration and provides significantly higher bandwidth and I/O concurrency
- Dynamic Provisioning software provides a wide-striping technology that dramatically improves performance, capacity utilization and management:
  - An improved I/O "buffer" to burst into during peak usage times
  - A smoothing effect on the virtual machine workload that can eliminate hot spots, reducing performance-related virtual machine issues
  - Minimization of excess, unused capacity by leveraging the combined capabilities of all disks comprising a storage pool

Network:
- iSCSI storage traffic: a physically separate network has been defined in this reference architecture for iSCSI storage traffic to provide higher throughput and performance; subnetting is used to break the configuration into smaller, more efficient networks
- A Live Migration network ensures high-speed transfer of VMs between nodes in the Hyper-V failover cluster
- Microsoft MPIO round-robin load balancing automatically selects a path by rotating through all available paths, balancing the load across them and optimizing IOPS and response time
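The round-robin path selection described above can be illustrated in a few lines. This is a sketch of the rotation policy only; the path names are placeholder assumptions, not real MPIO identifiers.

```python
from itertools import cycle

# Minimal sketch of round-robin path selection: each I/O goes down the next
# path in rotation, so load spreads evenly across all available paths.
# Path names are illustrative placeholders.
paths = ["fc-port-0A", "fc-port-0B", "fc-port-1A", "fc-port-1B"]
next_path = cycle(paths).__next__

issued = [next_path() for _ in range(8)]   # dispatch 8 I/Os
counts = {p: issued.count(p) for p in paths}
# With 8 I/Os over 4 paths, every path carries exactly 2.
```

In a real fabric a path can also drop out; MPIO then rotates over the surviving paths, which is why the CSV and dual-fabric design above keeps I/O flowing through component failures.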
Architecture / Other: Speed deployment of an initial cloud: a single vendor for ordering, guides and support of a pre-validated reference architecture
- Services to assist with planning, deployment and more
- Quickly provision virtual machines with the assurance that the underlying infrastructure resources are in place
- Rapid provisioning through automation of complicated tasks, such as provisioning a new LUN and then adding it to a host cluster
- Self-service VM provisioning: administrators can delegate authority to other users to allow them to create virtual machines based on a set of predetermined templates
- Rapidly grow the infrastructure to adapt to market pressures by leveraging the scalable design of the architecture
- Automated and self-service VM provisioning
- Roadmap to further automation and orchestration through Hitachi converged data center solutions

CB 2000:
- Multi-blade SMP feature combines the CPU, memory and I/O resources of multiple blades into one
- Modular design to support faster blade modification and redeployment

AMS 2500:
- Storage administrators are no longer required to manually define specific affinities between LUs and controllers, simplifying overall administration
- Active-active controller design is fully integrated with standard host-based multipathing, eliminating mandatory requirements to implement proprietary multipathing software
- Dynamic Provisioning eliminates the need to manage the placement of virtual machines via a manual process and allows for the automation of virtual machine creation and deployment

Network
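The delegated, template-based self-service provisioning described above can be sketched as follows. The template names, delegation table and `provision()` flow are illustrative assumptions, not a System Center or HDS API.

```python
# Minimal sketch of template-based, delegated VM provisioning: admins
# predefine templates and grant users the right to deploy from them.
# All names and sizes here are illustrative assumptions.

TEMPLATES = {
    "small":  {"vcpus": 1, "ram_gb": 4},
    "medium": {"vcpus": 2, "ram_gb": 8},
}

# Delegation: each user may only deploy from templates granted to them.
DELEGATIONS = {"alice": {"small"}, "bob": {"small", "medium"}}

def provision(user, template):
    if template not in DELEGATIONS.get(user, set()):
        raise PermissionError(f"{user} may not deploy template {template!r}")
    spec = TEMPLATES[template]
    # In a real deployment, automation would carve a LUN here, attach it
    # to the host cluster and create the VM from the template image.
    return {"owner": user, **spec}

vm = provision("alice", "small")  # → {'owner': 'alice', 'vcpus': 1, 'ram_gb': 4}
```

The design point is that delegation plus fixed templates is what makes self-service safe: users get speed, while the standardized underlying infrastructure keeps outcomes repeatable.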
Choice: Virtual Storage Platform:
This is the maturity model for the converged solutions. We plan to offer solutions with various levels of integration based on customers' needs. In Phase 1, we plan to deliver reference architectures optimized for top applications like Exchange and Oracle and for virtualized environments such as Hyper-V Cloud and, later, VMware. In Phase 2, we plan to productize key reference architectures into truly converged, orderable products. In Phase 3, we will deliver integration at the management level, orchestrating the various components across technology boundaries. This is where UCP comes in. Phase 4 is when the technology can truly enable IT self-service and orchestration can be performed across multiple data centers, even to the cloud.
Business Problem
Overview
- Inactive file content is moved off primary NAS into a cloud store
- Users maintain online access to content
- Files are accessed via primary NAS by way of a placeholder, or stub file
- Delivered as CAPEX or OPEX

Benefits
- Improve performance and scalability of primary NAS
- Improve backup and restore times
- Reduce OPEX and CAPEX
- Improve manageability

Hitachi Data Systems value proposition
Low Risk
- Hitachi Cloud Solution Packages help organizations of all sizes grow into cloud while protecting their existing investments
- Our cloud services and solution packages help you with unstructured data in NetApp, EMC Celerra and Microsoft Windows file server environments and Microsoft SharePoint farms
- Hitachi Cloud Solutions are delivered as OPEX if you want a fully managed, consumption-based cloud service
- Hitachi Cloud Solution Packages are delivered as CAPEX if you want to manage and deliver your own cloud service internally or externally

Eliminate Capital Expense
- Consumption-based pricing
- Just-in-time storage supports unpredictable demands
- Eliminate oversubscription of storage

Reduce Operating Expense
- Move stale data to a fully managed environment
- Improve efficiency and utilization of existing investment
- Reduce the amount of backup media, licensing and management overhead required

Improve Primary Environments
- Reduce the amount of storage in the primary environment
- Improve performance, scalability and manageability
- Achieve better ROA

Improve Backup Performance
- Reduce the amount of storage to be backed up
- Improve backup and restore times

Hitachi Data Systems key differentiators
Lower Total Cost of Ownership (TCO) by at least 25% with Hitachi Cloud Services for File Tiering
The financial and operating benefits of Hitachi Cloud Services for File Tiering are clear. When comparing estimates of business as usual (local NAS data storage TCO) to those of the Hitachi cloud solution, there is a significant cost benefit with the Hitachi cloud solution.
With all configuration sizes, there is at least a 25% reduction in the unit-cost TCO for owning and managing the file and content environment. The table below shows the relative unit-cost (TCO/TB/year) comparison for each configuration size. The TCO model consists of costs related to storage capacity, number of arrays managed, software, services, management labor, power, cooling, and depreciation, and it assumes a data growth rate of 30%, a utilization rate of 66% and a depreciation term of 4 years.
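As a worked illustration of that unit-cost comparison: the 30% growth, 66% utilization and 4-year depreciation figures come from the model above, while the dollar rates below are purely illustrative assumptions.

```python
# Worked sketch of the TCO/TB/year comparison. The growth, utilization and
# depreciation parameters are from the model described in the text; the
# per-TB dollar rates are illustrative assumptions, not published pricing.

GROWTH = 0.30        # annual data growth rate
UTILIZATION = 0.66   # usable fraction of purchased (raw) capacity
YEARS = 4            # depreciation term

def tco_per_tb_year(annual_cost_per_raw_tb, data_tb):
    """Average annual cost per TB of data actually stored: raw capacity
    needed each year (at the stated utilization) times the all-in rate,
    averaged over the depreciation term and the growing data volume."""
    total = 0.0
    tb = data_tb
    for _ in range(YEARS):
        total += (tb / UTILIZATION) * annual_cost_per_raw_tb
        tb *= 1 + GROWTH                   # data grows 30% per year
    avg_tb = sum(data_tb * (1 + GROWTH) ** y for y in range(YEARS)) / YEARS
    return total / YEARS / avg_tb

# Illustrative: if the cloud service lowers the all-in annual cost per raw
# TB from $1,000 to $750, the unit-cost TCO drops by exactly 25%.
bau = tco_per_tb_year(1000, data_tb=100)
cloud = tco_per_tb_year(750, data_tb=100)
savings = 1 - cloud / bau
```

Because the unit cost scales linearly with the per-TB rate, any mix of reduced media, labor, power and backup costs that lowers the blended rate by a quarter yields the 25% unit-TCO reduction claimed above.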
Business Problem
Hitachi Data Systems solution
- Reference architecture purpose-built to apply the latest Microsoft and HDS technologies
- Hitachi Data Discovery for Microsoft SharePoint moves files to lower-cost storage
- Microsoft Active Directory integration ensures users access only their entitled data
- Hitachi Data Protection Suite provides enterprise-level backup and recovery

AMS key features
- Symmetric active-active controllers with dynamic load balancing
- SAS architecture
- Online RAID group expansion
- LUN grow/shrink
- Mega-LUN
- High-density expansion trays
- RAID-6
- Cache partitioning
- Multiprotocol: FC SAN/iSCSI
- Volume Migration modular software
- Hitachi TrueCopy® Extended Distance software
- Green storage initiative
- Improved security

HNAS key features
Performance, availability and scalability
- Best-in-class performance; near-linear scale-out up to 16PB and 256TB volumes
Intelligent file tiering
- Policy-based hierarchical storage management that spans the Hitachi NAS Platform and the Hitachi Content Platform
Advanced content indexing
- Provides more efficient indexing operations and implements a data management API, which enables the Hitachi Data Discovery Suite to coordinate intelligent file tiering operations based on file content
Enhanced high availability
- Optimized file system premount checks and improved NVRAM replay time provide faster cluster failover times that mitigate unplanned downtime
- Non-disruptive "rolling" upgrades limit the planned downtime required for updates and upgrades
System management framework
- Comprehensive, centralized management with GUI and CLI interfaces; SNMP, LDAP, Active Directory with auditing, and NIS
Virtualization services
- Virtual volumes (VVols), virtual servers and cluster namespace, which unify the directory structure while simplifying storage capacity management tasks
Data management services
- Centralized GUI management, pointer-based snapshots, JetClone writable snapshots, quick file restore, hard and soft quotas (volume, group or user), scalable file systems, storage pools, and transparent data migration and relocation
Hardware-accelerated protocols
- NFS, CIFS, iSCSI, NDMP, FTP, HTTP, TCP/IP and UDP
Data protection services
- Active-active clustering, from two to eight nodes with cluster read caching, for scalable, read-intensive NFS workloads; incremental block replication (IBR); JetMirror high-speed replication; and Metro Area Cluster
- Complemented by Hitachi Command Suite and Hitachi Command Director management software
Application Protector
- Application-consistent snapshots for data protection
- Support for Hitachi Virtual Storage Platform, Hitachi Universal Storage Platform® V and VM, and Hitachi Adaptable Modular Storage systems
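The stub-file tiering used throughout these solutions can be sketched as follows. The paths, the 90-day threshold and the `cloud_store` dictionary are illustrative stand-ins for HNAS/HCP behavior, not their actual interfaces.

```python
# Minimal sketch of policy-based file tiering with stub files: content idle
# longer than a threshold moves to a cloud store, leaving a small
# placeholder (stub) behind so users still see and open the file normally.
# Paths, threshold and cloud_store are illustrative assumptions.

STALE_AFTER = 90 * 24 * 3600   # tier files idle for 90 days (assumption)
cloud_store = {}               # stand-in for the cloud/content platform

def tier(files, now):
    """files: {path: (last_access_epoch, content)} on the primary tier.
    Returns the primary tier with stale files replaced by stubs."""
    primary = {}
    for path, (atime, content) in files.items():
        if now - atime > STALE_AFTER:
            cloud_store[path] = content      # move content off primary
            primary[path] = ("STUB", path)   # placeholder keeps the name
        else:
            primary[path] = (atime, content)
    return primary

def read(primary, path):
    """Users keep online access: reading a stub recalls from the cloud."""
    entry = primary[path]
    if entry[0] == "STUB":
        return cloud_store[path]
    return entry[1]

now = 10_000_000
files = {"/share/old.doc": (0, b"archived"), "/share/new.doc": (now, b"active")}
primary = tier(files, now)
```

This is why the slides claim smaller primary footprints and faster backups: only the active files and tiny stubs remain on primary NAS or SharePoint, while the bulk of the data lives in the managed cloud store.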
Overview
- Inactive SharePoint data is moved out of the primary farm into a cloud store
- Users maintain online access to content
- Files are accessed via primary SharePoint by way of a placeholder, or stub file
- Delivered as CAPEX or OPEX

Benefits
- Improve performance and scalability of the SharePoint environment
- Significantly reduce backup time and volume
- Reduce SQL Server capacity costs
- Reduce OPEX through improved manageability

The Hitachi Data Systems value proposition and key differentiators are the same as for the file tiering solution above.
Business Problem
Overview
- Deliver bottomless, backup-free file serving to distributed consumers
- Local cache for fast retrieval of frequently accessed content
- Supports multiple tenants and namespaces
- Standard NFS/CIFS on-ramp with AD and LDAP support
- Delivered as CAPEX or OPEX

Benefits
- Reduces cost and complexity by eliminating management and backup at the edge
- Minimizes risk with compliance and retention
- Supports on-demand capacity requests
- Enables chargeback of consumers based on utilization

The Hitachi Data Systems value proposition is the same as for the file tiering solution above.

Hitachi Data Systems key differentiators
Benefits:
Reduce Costs
- Eliminate backups
- Improve efficiency and utilization
Simplify IT
- Reduce islands of storage and infrastructure
- Standard connection into the Hitachi Content Platform
Reduce Risk
- Supports compliance and governance capabilities
- Security with AD and …
Streamline Cloud
- Support multi-tenant, multi-namespace environments
- Seamless connection into a central "core" HCP
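The utilization-based chargeback mentioned above can be sketched in a couple of lines. The tenant names and the per-TB rate are illustrative assumptions, not actual service pricing.

```python
# Minimal sketch of utilization-based chargeback across tenants: metered
# capacity times an assumed service rate yields each tenant's invoice.

RATE_PER_TB_MONTH = 120.0                         # assumed $/TB-month rate
usage_tb = {"tenant-a": 42.0, "tenant-b": 7.5}    # metered capacity used

invoices = {t: round(tb * RATE_PER_TB_MONTH, 2) for t, tb in usage_tb.items()}
# tenant-a is billed 5040.0, tenant-b 900.0
```

Metering per namespace is what makes the multi-tenant model above workable: each consumer pays only for the capacity they actually use.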