2. Reduced costs: By leveraging existing network components (network interface cards [NICs], switches, etc.) as a storage fabric, iSCSI increases the return on investment (ROI) in data center network communications and can avoid the capital investment required to build a separate storage network. For example, iSCSI host bus adapters (HBAs) are 30-40% less expensive than Fibre Channel HBAs. Also, in some cases, 1 Gigabit Ethernet (GbE) switches cost 50% less than comparable Fibre Channel switches.
3. Organizations employ qualified network administrator(s) or trained personnel to manage network operations. Being a network protocol, iSCSI leverages existing network administration knowledge bases, obviating the need for additional staff and educational training to manage a different storage network.
4. Improved options for DR: One of iSCSI's greatest strengths is its ability to travel long distances using IP wide area networks (WANs). Offsite data replication plays a key part in disaster recovery plans by preserving company data at a co-location that is protected by distance from a disaster affecting the original data center. Using a SAN router (iSCSI to Fibre Channel gateway device) and a target array that supports standard storage protocols (like Fibre Channel), iSCSI can replicate data from a local target array to a remote iSCSI target array, eliminating the need for costly Fibre Channel SAN infrastructure at the remote site.
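Whether WAN-based replication is practical depends on the data volume, the link speed, and the backup window. A rough sizing sketch (the 500 GB delta, 100 Mbps link, and 0.7 efficiency factor below are illustrative assumptions, not recommendations):

```python
def replication_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Estimate hours to replicate data_gb over a WAN link.

    efficiency accounts for protocol overhead and link contention
    (0.7 is an illustrative assumption, not a measured value).
    """
    bits = data_gb * 8 * 10**9            # decimal gigabytes to bits
    usable_bps = link_mbps * 10**6 * efficiency
    return bits / usable_bps / 3600

# Example: a 500 GB nightly delta over a 100 Mbps WAN link
hours = replication_hours(500, 100)
print(f"{hours:.1f} h")   # roughly 15.9 h -- too long for an 8 h backup window
```

Running this kind of estimate before committing to a remote site helps determine whether the available WAN bandwidth supports the intended replication schedule.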
5. iSCSI-based tiered storage solutions such as backup-to-disk (B2D) and near-line storage have become popular disaster recovery options. Using iSCSI in conjunction with Serial Advanced Technology Attachment (SATA) disk farms, B2D applications inexpensively back up, restore, and search data at rapid speeds.
6. Boot from SAN: As operating system (OS) images migrate to network storage, boot from SAN (BfS) becomes a reality, allowing chameleon-like servers to change application personalities based on business needs, while removing the ties to Fibre Channel HBAs previously required for SAN connectivity (though BfS would still require a hardware initiator).
9. Software Initiators: While software initiators offer cost-effective SAN connectivity, there are some issues to consider. The first is host resource consumption versus performance. An iSCSI initiator runs within the input/output (I/O) stack of the operating system, utilizing host memory and CPU for iSCSI protocol processing. By leveraging the host, an iSCSI initiator can outperform almost any hardware-based initiator. However, as more iSCSI packets are sent or received by the initiator, more memory and CPU bandwidth are consumed, leaving less for applications. The amount of resource consumption is highly dependent on the host CPU, NIC, and initiator implementation, but it can be problematic in certain scenarios. In server virtualization environments, for example, a software iSCSI initiator can consume resources that could otherwise be allocated to additional virtual machines.
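The host-CPU cost of software initiators can be sized with a back-of-the-envelope calculation. The sketch below uses the often-quoted rule of thumb of roughly 1 Hz of CPU per bit/s of TCP throughput; that constant is an assumption to validate against your own host, NIC, and initiator implementation, not a vendor figure:

```python
def cpu_fraction(throughput_gbps: float, cores: int, core_ghz: float,
                 hz_per_bps: float = 1.0) -> float:
    """Fraction of total host CPU consumed by software iSCSI/TCP processing.

    hz_per_bps encodes the rule-of-thumb cost of TCP processing
    (~1 Hz per bit/s); treat it as an assumption, not a measurement.
    """
    needed_hz = throughput_gbps * 1e9 * hz_per_bps
    available_hz = cores * core_ghz * 1e9
    return needed_hz / available_hz

# Example: sustained 1 Gb/s on a 4-core 2.5 GHz host
print(f"{cpu_fraction(1.0, 4, 2.5):.0%}")  # 10% of total CPU
```

Even 10% of total CPU may matter on a consolidated virtualization host, which is why the hardware-offload option below exists.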
10. Hardware Initiators: iSCSI HBAs simplify boot-from-SAN (BfS). Because an iSCSI HBA is a combination NIC and initiator, it does not require assistance to boot from the SAN, unlike its software initiator counterparts. By discovering a bootable target LUN during system power-on self test (POST), an iSCSI HBA can enable an OS to boot from an iSCSI target like any DAS or Fibre Channel SAN-connected system. In terms of resource utilization, an iSCSI HBA offloads both TCP and iSCSI protocol processing, saving host CPU cycles and memory. In certain scenarios, such as server virtualization, where host CPU cycles are at a premium, an iSCSI HBA may be the only practical choice.
12. Software Targets: Any standard server can be used as a software target storage array, but the target should be deployed as a stand-alone application. A software target can consume most of the platform's resources, leaving little room for additional applications.
13. Hardware Targets: Many of the iSCSI disk array platforms are built using the same storage platform as their Fibre Channel cousin. Thus, many iSCSI storage arrays are similar, if not identical, to Fibre Channel arrays in terms of reliability, scalability, performance, and management. Other than the controller interface, the remaining product features are almost identical.
15. Tape libraries should be capable of being iSCSI target devices; however, broad adoption and support in this category has not been seen, and tape connectivity remains territory served by native Fibre Channel.
17. iSCSI to Fibre Channel gateways and routers play a vital role in two ways. First, these devices increase the return on capital invested in Fibre Channel SANs by extending connectivity to “Ethernet islands,” where devices that were previously unable to reach the Fibre Channel SAN can tunnel through using a router or gateway. Second, iSCSI routers and gateways enable Fibre Channel to iSCSI migration. SAN migration is a gradual process; replacing a large investment in Fibre Channel SANs all at once is rarely cost-feasible. As IT administrators carefully migrate from one interconnect to another, iSCSI gateways and routers afford them the luxury of time and money. One note of caution: it is important to know the port speeds and the amount of traffic passing through a gateway or router. These devices can become bottlenecks if too much traffic from one network is aggregated into another. For example, some router products offer eight 1 GbE ports and only two 4 Gb Fibre Channel ports. While total throughput is the same, careful attention must be paid to ensure traffic is evenly distributed across ports.
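The port-speed caution above reduces to simple arithmetic: compare aggregate ingress to aggregate egress. A minimal sketch using the router example from the text (nominal port rates, ignoring encoding overhead):

```python
def fan_in_ratio(eth_ports: int, eth_gbps: float, fc_ports: int, fc_gbps: float) -> float:
    """Ratio of aggregate Ethernet ingress to aggregate Fibre Channel egress.

    A ratio > 1 means the Ethernet side can offer more traffic than the
    FC side can carry, so the gateway becomes a bottleneck unless traffic
    is evenly distributed and stays below FC capacity.
    """
    return (eth_ports * eth_gbps) / (fc_ports * fc_gbps)

# The example from the text: eight 1 GbE ports feeding two 4 Gb FC ports
print(fan_in_ratio(8, 1.0, 2, 4.0))  # 1.0 -- balanced only if traffic is evenly spread
```

Note that a ratio of exactly 1.0 still assumes perfect distribution; if most initiators happen to map to one FC port, that port saturates while the other idles.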
18. Any x86 server can act as an iSCSI to Fibre Channel gateway. Using a Fibre Channel HBA and iSCSI target software, any x86 server can present LUNs from a Fibre Channel SAN as an iSCSI target. Once again, this is not a turnkey solution—especially for large SANs—and caution should be exercised to prevent performance bottlenecks. However, this configuration can be cost-effective for small environments and connectivity to a single Fibre Channel target or small SAN.
20. Voracious storage consumption, combined with lower-cost SAN devices, has stimulated SAN growth beyond what administrators can manage without help. iSCSI exacerbates this problem by proliferating iSCSI initiators and low-cost target devices throughout a boundless IP network. Thus, a discovery and configuration service, such as the Internet Storage Name Service (iSNS), is a must for large SAN configurations. Although other discovery services exist for iSCSI SANs, such as the Service Location Protocol (SLP), iSNS is emerging as the most widely accepted solution.
40. Hot/Cold Aisle Containment: Arranging equipment racks in rows that allow for the supply of cold air to the front of racks and the exhaust of hot air at the rear. Adjacent rows have opposing airflow, so that each pair of rows shares a single cold (supply) or hot (exhaust) aisle. Some very dense rack configurations may require chimney exhausts above the racks to channel hot air away from the cold air supply. The key design goal is to prevent hot exhaust air from mixing with the cold air supply and diminishing its effectiveness. Containment is achieved through enclosed cabinet panels, end-of-row wall or panel structures, or plastic sheet curtains.
42. Most new data centers constructed today include some sort of electronic locking system. These range from simple, offline keypad locks to highly complex systems that include access portals (aka man traps) and anti-tailgating systems. Electronic lock systems allow the flexibility to issue and revoke access instantaneously, or nearly so, depending on the product. Online systems (sometimes referred to as hardwired systems) consist of an access control panel that connects to a set of doors and readers of various types using wiring run through the building. Offline systems consist of locks that integrate the reader, a battery, and all of the electronics needed to make access determinations into the lock itself. Updates to these locks are usually done through a hand-held device that is plugged into the lock.
43. There are two fairly common reader technologies in use today. One is magnetic-stripe based. These systems usually read data encoded on tracks two or three. While the technology is mature and stable, it has a few weaknesses. The data on the cards can be duplicated with equipment easily purchased on the Internet, and the magnetic stripe can wear out or become erased if it gets close to a magnetic field. One option for improving the security of magnetic-stripe installations is a dual-validation reader, where, after swiping the card, the user must also enter a PIN code before the lock will open.
44. The other common access token in use today is the proximity card, also called an RFID card. These cards have an integrated circuit (IC), a capacitor, and a wire coil inside them. When the coil is placed near a reader, the energy field emitted by the reader induces a charge in the capacitor, which powers the IC. Once powered, the IC transmits its information to the reader, and the reader, or the control panel it communicates with, determines whether access should be granted.
45. Beyond access control, the other big advantage of electronic locking systems is their ability to provide an audit trail. The system keeps track of all credentials presented to the reader and the resulting outcome of each presentation: access was either granted or denied. Complex access control systems will even allow you to implement features such as a two-man rule, where two people must present authorized credentials before a lock will open, or anti-passback.
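The audit-trail and two-man-rule behavior described above can be sketched as a toy access-control model (the badge names, log format, and rules are illustrative, not drawn from any particular product):

```python
class DoorController:
    """Toy model of an electronic lock with an audit trail and a two-man rule."""

    def __init__(self, authorized: set, two_man: bool = False):
        self.authorized = authorized
        self.two_man = two_man
        self.pending = None   # first credential in a two-man sequence
        self.audit = []       # every presentation is logged, granted or not

    def present(self, badge: str) -> bool:
        """Return True if the lock opens; log every presentation either way."""
        if badge not in self.authorized:
            self.audit.append((badge, "DENIED"))
            self.pending = None
            return False
        if self.two_man and self.pending is None:
            self.pending = badge
            self.audit.append((badge, "FIRST-OF-TWO"))
            return False
        if self.two_man and badge == self.pending:
            self.audit.append((badge, "DENIED-SAME-PERSON"))
            return False
        self.pending = None
        self.audit.append((badge, "GRANTED"))
        return True

door = DoorController({"alice", "bob"}, two_man=True)
door.present("alice")          # first authorized credential: lock stays closed
print(door.present("bob"))     # True -- second distinct credential opens the lock
print(door.present("mallory")) # False -- denied, but still recorded in the audit trail
```

The key property is that the audit list grows on every presentation, successful or not, which is exactly what makes electronic systems useful for after-the-fact investigation.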
46. Anti-passback systems require a user to present credentials both to enter and to exit a given space. Obviously, locking someone into a room would be a life safety issue, so usually some sort of alarm is sounded on egress if proper credentials were not presented. Anti-passback also allows you to track where individuals are at any given time, because the system knows that they presented credentials to exit a space.

Commissioning

Commissioning is essential to validate the design, verify load capacities, and test failover mechanisms. A commissioning agent can identify design flaws, single points of failure, and inconsistencies between the build-out and the original design. Normally, a commissioning agent is independent of the design and build teams.

A commissioning agent will inspect such things as proper wiring, pipe sizes, weight loads, chiller and pump capacities, electrical distribution panels, and switchgear. They will test battery run times, UPS and generator step loads, and air conditioning. They will simulate load with resistive coils to generate heat and UPS draw, and go through a playbook of what-if scenarios to test all aspects of redundant systems.

Load Balancing/High Availability

Connectivity

Network

Network components in the data center, such as Layer 3 backbone switches, WAN edge routers, perimeter firewalls, and wireless access points, are described in the ITRP2 Network Baseline Standard Architecture and Design document, developed by the Network Technology Alliance, sister committee to the Systems Technology Alliance. The latest versions of the standard can be located at http://nta.calstate.edu/ITRP2.shtml.

Increasingly, the boundaries between systems and networks are blurring. Virtualization is abstracting traditional networking components and moving them into software and the hypervisor layer.
Virtual switches

Considerations beyond “common services”

The following components have elements of network-enabling services but are also systems-oriented and may be managed by the systems or applications groups.

DNS

For privacy and security reasons, many large enterprises choose to make only a limited subset of their systems “visible” to external parties on the public Internet. This can be accomplished by creating a separate Domain Name System (DNS) server with entries for these systems and locating it where it can be readily accessed by any external user on the Internet (e.g., in a DMZ LAN behind external firewalls to the public Internet). Other DNS servers containing records for internally accessible enterprise resources may be provided as “infrastructure servers” hidden behind additional firewalls in “trusted” zones in the data center. This division of responsibility permits the DNS server with records for externally visible enterprise systems to be exposed to the public Internet, while reducing the security exposure of DNS servers containing the records of internal enterprise systems.

E-Mail (MTA only)

For security reasons, large enterprises may choose to distribute e-mail functionality across different types of e-mail servers. A message transfer agent (MTA) server that only forwards Simple Mail Transfer Protocol (SMTP) traffic (i.e., no mailboxes are contained within it) can be located where it is readily accessible to other enterprise e-mail servers on the Internet (e.g., in a DMZ LAN behind external firewalls to the public Internet). Other e-mail servers containing user agent (UA) mailboxes for enterprise users may be provided as “infrastructure servers” located behind additional firewalls in “trusted” zones in the data center.
This division of responsibility permits the “external” MTA server to communicate with any other e-mail server on the public Internet, but reduces the security exposure of “internal” UA e-mail servers.

Voice Media Gateway

The data center site media gateway will include analog or digital voice ports for access to the local PSTN, possibly including integrated services digital network (ISDN) ports.

With Ethernet IP phones, the VoIP gateway is used by data center site phone users to gain local dial access to the PSTN. The VoIP media gateway converts voice calls between packetized IP voice traffic on a data center site network and local circuit-switched telephone service. With this configuration, the VoIP media gateway operates under the control of a call control server located at the data center site, or out in the ISP public network as part of an “IP Centrex” or “virtual PBX” service. However, network operators/carriers are increasingly providing a SIP trunking interface between their IP networks and the PSTN; this will permit enterprises to send VoIP calls across IP WANs to communicate with PSTN devices without the need for a voice media gateway or direct PSTN interface. Instead, data center site voice calls can be routed through the site’s WAN edge IP routers and data network access links.

Ethernet L2 Virtual Switch

In a virtual server environment, the hypervisor manages L2 connections from virtual hosts to the NIC(s) of the physical server. A hypervisor plug-in module may be available to allow the switching characteristics to emulate a specific type of L2 switch so that it can be managed apart from the hypervisor and incorporated into the enterprise NMS.

Top-of-Rack Fabric Switches

As a method of consolidating and aggregating connections from dense rack configurations in the data center, top-of-rack switching has emerged as a way to provide both Ethernet and Fibre Channel connectivity in one platform.
Generally, these devices connect to end-of-row switches that, optimally, can manage all downstream devices as one switching fabric. The benefits are a modularized approach to server and storage networks, fewer cross-connects, and better cable management.

Network Virtualization

Structured Cabling

The CSU has developed a set of standards for infrastructure planning that should serve as a starting point for designing cabling systems and other utilities serving the data center. These Telecommunications Infrastructure Planning (TIP) standards can be referenced at the following link: http://www.calstate.edu/cpdc/ae/gsf/TIP_Guidelines/

There is also an NTA working group specific to cabling infrastructure, known as the Infrastructure Physical Plant Working Group (IPPWG). Information about the working group can be found at the following link: http://nta.calstate.edu/NTA_working_groups/IPP/

The approach to structured cabling in a data center differs from other aspects of building wiring due to the following issues:

Managing higher densities, particularly fiber optics
Cable management, especially with regard to moves, adds, and changes
Heat control, for which cable management plays a role

The following are components of structured cabling design in the data center:

Cable types: Cabling may be copper (shielded or unshielded) or fiber optic (single-mode or multimode).
Cabling pathways: usually a combination of raised-floor access and overhead cable tray.
Cables under a raised floor should be in channels that protect them from adjacent systems, such as power and fire suppression.
Fiber ducts: fiber optic cabling has specific stress and bend-radius requirements to protect the transmission of light, and duct systems designed for fiber take into account the proper routing and storage of strands, pigtails, and patch cords among the distribution frames and splice cabinets.
Fiber connector types: usually MT-RJ, LC, SC, or ST. The use of modular fiber “cassettes” and trunk cables allows for higher densities and the benefit of factory terminations rather than terminations in the field, which can be time-consuming and subject to higher dB loss.
Cable management:

Operations

Information Technology (IT) operations refers to the day-to-day management of an IT infrastructure. IT operations incorporate all the work required to keep a system running smoothly. This process typically includes the introduction and control of small changes to the system, such as mailbox moves and hardware upgrades, but it does not affect the overall system design. Operational support includes systems monitoring, network monitoring, problem determination, problem reporting, problem escalation, operating system upgrades, change control, version management, backup and recovery, capacity planning, performance tuning, and system programming.

The mission of data center operations is to provide the highest possible quality of central computing support for the campus community and to maximize the availability of central computing systems.
Data center operations services include:
Help Desk Support
Network Management
Data Center Management
Server Management
Application Management
Database Administration
Web Infrastructure Management
Systems Integration
Business Continuity Planning
Disaster Recovery Planning
Email Administration

Staffing

Staffing is the process of acquiring, deploying, and retaining a workforce of sufficient quantity and quality to maximize the organizational effectiveness of the data center.

Training

Training is not simply a support function, but a strategic element in achieving an organization’s objectives.

IT Training Management Processes and Sample Practices

Management Processes | Sample Practices
Align IT training with business goals. | Enlist executive-level champions. Involve critical stakeholders.
Identify and assess IT training needs. | Document competencies/skills required for each job description. Perform a gap analysis to determine needed training.
Allocate IT training resources. | Use an investment process to select and manage training projects. Provide resources for management training, e.g., leadership and project management.
Design and deliver IT training. | Give trainees choice among different training delivery methods. Build courses using reusable components.
Evaluate/demonstrate the value of IT training. | Collect information on how job performance is affected by training. Assess evaluation results in terms of business impact.

Monitoring

Monitoring is a critical element of data center asset management and covers a wide spectrum of issues, such as system availability, system performance levels, component serviceability, and timely detection of operational or security problems, such as disk capacity exceeding defined thresholds or system binary files being modified.

Automation

Automation of routine data center tasks reduces staffing headcount by using tools such as automated tape backup systems that auto-load magnetic media from tape libraries and send backup status and exception reports to data center staff. The potential for automating routine tasks is limitless. Automation increases reliability and frees staff from routine tasks so that continuous improvement of operations can occur.

Console Management

To the extent possible, console management should integrate the management of heterogeneous systems using orchestration or a common management console.

Remote Operations

Lights-out operations are facilitated by effective remote operations tools. This leverages the economies of scale enjoyed by managing multiple remote production data centers from a single location, which may be dynamically assigned in a manner such as “follow the sun.”

Accounting

Auditing

The CSU publishes findings and campus responses to information security audits. Reports can be found at the following site: http://www.calstate.edu/audit/audit_reports/information_security/index.shtml

Disaster Recovery

Relationship to overall campus strategy for Business Continuity

Campuses should already have a business continuity plan, which typically includes a business impact analysis (BIA) to monetize the effects of interrupted processes and system outages.
Deducing a maximum allowable downtime through this exercise will inform service and operational level agreements, as well as establish recovery time and point objectives, discussed in section 2.7.4.1, Backup and Recovery.

Relationship to CSU Remote Backup – DR initiative

ITAC has sponsored an initiative to explore business continuity and disaster recovery partnerships between CSU campuses. [Charter document?] Several campuses have teamed to develop documents and procedures, and their work product is posted at http://drp.sharepointsite.net/itacdrp/default.aspx. Examples of operational considerations, memorandums of understanding, and network diagrams are given in Section 3.5.4.2.

Infrastructure considerations

Site availability

Disaster recovery planning should account for short-, medium-, and long-term disaster and disruption scenarios, including impact on and accessibility to the data center. Consideration should be given to the location, size, capacity, and utilities necessary to recover the level of service required by the critical business functions. Attention should be given to structural, mechanical, electrical, plumbing, and control systems, and planning should also include workspace, telephones, workstations, network connectivity, etc.

Alternate sites could be geographically diverse locations on the same campus, locations on other campuses (perhaps as part of a reciprocal agreement between campuses to recover each other’s basic operations), or commercially available co-location facilities described in Section 2.5.3.2.

When determining an alternate site, management should consider scalability, in the event a long-term disaster becomes a reality. The plan should include logistical procedures for accessing backup data as well as moving personnel to the recovery location.
Co-location

One method of accomplishing business continuity objectives through redundancy with geographic diversity is to use a co-location scenario, either through a reciprocal agreement with another campus or through a commercial provider. The following are typical types of co-location arrangements:

Real estate investment trusts (REITs): REITs offer leased shared data center facilities in a business model that leverages tax laws to offer savings to customers.
Network-neutral co-location: Network-neutral co-location providers offer leased rack space, power, and cooling with the added service of peer-to-peer network cross-connection.
Co-location within a hosting center: Hosting centers may offer co-location as a basic service with the ability to upgrade to various levels of managed hosting.
Unmanaged hosted services: Hosting centers may offer a form of semi-co-location wherein the hosting provider owns and maintains the server hardware for the customer, but doesn't manage the operating system or applications/services that run on that hardware.

Principles for co-location selection criteria:

Business process includes or provides an e-commerce solution
Business process does not contain applications and services that were developed and are maintained in-house
Business process does not predominantly include internal infrastructure or support services that are not web-based
Business process contains predominantly commodity and horizontal applications and services (such as email and database systems)
Business process requires geographically distant locations for disaster recovery or business continuity
Co-location facility meets the level of reliability objective (Tier I, II, III, or IV) at less cost than retrofitting or building new campus data centers
Access to particular IT staff skills and bandwidth of the current IT staff
Level of SLA matches the campus requirements, including those for disaster
recovery
Co-location provider can accommodate regulatory auditing and reporting for the business process
Current data center facilities have run out of space, power, or cooling

[concepts from Burton Group article, “Host, Co-Lo, or Do-It-Yourself?”]

Operational considerations

Recovery Time Objectives and Recovery Point Objectives are discussed in section 2.7.4.1, Backup and Recovery.

Total Enterprise Virtualization

Management Disciplines

Service Management

IT service management is the integrated set of activities required to ensure the cost and quality of IT services valued by the customer. It is the management of customer-valued IT capabilities through effective processes, organization, information, and technology, including:

Aligning IT with business objectives
Managing IT services and solutions throughout their lifecycles
Service management processes like those described in ITIL, ISO/IEC 20000, or IBM’s Process Reference Model for IT

Service Catalog

An IT Service Catalog defines the services that an IT organization delivers to business users and serves to align business requirements with IT capabilities, communicate IT services to the business community, plan demand for these services, and orchestrate the delivery of these services across the functionally distributed (and, oftentimes, multi-sourced) IT organization. An effective Service Catalog also segments the customers who may access the catalog, whether end users or business unit executives, and provides different content based on function, roles, needs, locations, and entitlements.

The most important requirement for any Service Catalog is that it should be business-oriented, with services articulated in business terms. In following this principle, the Service Catalog can provide a vehicle for communicating and marketing IT services to both business decision-makers and end users.
The ITIL framework distinguishes between these groups as “customers” (the business executives who fund the IT budget) and “users” (the consumers of day-to-day IT service deliverables). The satisfaction of both customers and users is equally important, yet it's important to recognize that these are two very distinct and different audiences. To be successful, the IT Service Catalog must be focused on addressing the unique requirements for each of these business segments. Depending on the audience, they will require a very different view into the Service Catalog. IT organizations should consider a two-pronged approach to creating an actionable Service Catalog:

The executive-level, service portfolio view of the Service Catalog used by business unit executives to understand how IT's portfolio of service offerings maps to business unit needs. This is referred to in this article as the “service portfolio.”

The employee-centric, request-oriented view of the Service Catalog that is used by end users (and even other IT staff members) to browse for the services required and submit requests for IT services. For the purposes of this article, this view is referred to as a “service request catalog.”
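The two views can be modeled as different projections of one underlying catalog. A minimal sketch (the service names, costs, and the `requestable` flag are illustrative assumptions, not catalog requirements):

```python
# Toy service catalog with the two views described above: an executive
# "service portfolio" view and an end-user "service request" view.
SERVICES = [
    {"name": "Email",          "description": "Hosted mailbox, 25 GB quota",
     "annual_cost": 120_000,   "requestable": True},
    {"name": "Web Hosting",    "description": "Departmental web site hosting",
     "annual_cost": 45_000,    "requestable": True},
    {"name": "DR Replication", "description": "Offsite replication of tier-1 data",
     "annual_cost": 300_000,   "requestable": False},  # negotiated, not self-service
]

def service_portfolio(catalog):
    """Executive view: every service with its cost, for quality/cost trade-offs."""
    return [(s["name"], s["annual_cost"]) for s in catalog]

def service_request_catalog(catalog):
    """End-user view: only the services a user may browse and request."""
    return [(s["name"], s["description"]) for s in catalog if s["requestable"]]

print(service_portfolio(SERVICES))
print(service_request_catalog(SERVICES))
```

The point of the separation is that executives see costs across the whole portfolio, while end users see only requestable services described in business terms.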
<br />As described above, a Service Request Catalog should look like consumer catalogs, with easy-to-understand descriptions and an intuitive store-front interface for browsing available service offerings. This customer-focused approach helps ensure that the Service Request Catalog is adopted by end users. The Service Portfolio provides the basis for a balanced, business-level discussion on service quality and cost trade-offs with business decision-makers.<br />To that end, service catalogs should extend beyond a mere list of services offered and can be used to facilitate:<br />IT best practices, captured as Service Catalog templates <br />Operational Level Agreements, Service Level Agreements (aligning internal & external customer expectations) <br />Hierarchical and modular service models <br />Catalogs of supporting and underlying infrastructures and dependencies (including direct links into the CMDB) <br />Demand management and capacity planning <br />Service request, configuration, validation, and approval processes <br />Workflow-driven provisioning of services <br />Key performance indicator (KPI)-based reporting and compliance auditing<br />Service Level Agreements<br />The existence of a quality service level agreement is of fundamental importance for any service or product delivery of any importance. It essentially defines the formal relationship between the supplier and the recipient, and is NOT an area for short-cutting. This is an area which too often is not given sufficient attention. 
This can lead to serious problems with the relationship and, indeed, serious issues with respect to the service itself and potentially the business itself.

An SLA will embrace all key issues, and typically will define and/or cover:

The services to be delivered
Performance, tracking, and reporting mechanisms
Problem management procedures
Dispute resolution procedures
The recipient's duties and responsibilities
Security
Legislative compliance
Intellectual property and confidential information issues
Agreement termination

Project Management

An organization’s ability to effectively manage projects allows it to adapt to changes and succeed in activities such as system conversions, infrastructure upgrades, and system maintenance. A project management system should employ well-defined and proven techniques for managing projects at all stages, including:

Initiation
Planning
Execution
Control
Close-out

Project monitoring will include:

Target completion dates: realistically set for each task or phase to improve project control.
Project status updates: measured against original targets to assess time and cost overruns.

Stakeholders and IT staff should collaborate on defining project requirements, budget, resources, critical success factors, and risk assessment, as well as a transition plan from the implementation team to the operational team.

Change Management

Change Management addresses routine maintenance and periodic modification of hardware, software, and related documentation. It is a core component of a functional ITIL process as well.
Functions associated with change management are:<br />Major modifications: significant functional changes to an existing system, or converting to or implementing a new system; usually involve detailed file mapping, rigorous testing, and training.<br />Routine modifications: changes to applications or operating systems to improve performance, correct problems or enhance security; usually not of the magnitude of major modifications and can be performed in the normal course of business.<br />Emergency modifications: periodically needed to correct software problems or restore operations quickly. Change procedures should be similar to those for routine modifications but include abbreviated change request, evaluation and approval procedures to allow for expedited action. Controls should be designed so that management completes detailed evaluation and documentation as soon as possible after implementation.<br />Patch management: similar to routine modifications, but relating to externally developed software.<br />Library controls: provide ways to manage the movement of programs and files between collections of information, typically segregated by the type of stored information, such as for development, test and production.<br />Utility controls: restrict the use of programs used for file maintenance, debugging, and management of storage and operating systems.<br />Documentation maintenance: identifies document authoring, approving and formatting requirements and establishes primary document custodians. Effective documentation allows administrators to maintain and update systems efficiently and to identify and correct programming defects, and also provides end users access to operations manuals.<br />Communication plan: change standards should include communication procedures that ensure management notifies affected parties of changes. 
An oversight or change control committee can help clarify requirements and make departments or divisions aware of pending changes.<br />[concepts from FFIEC Development and Acquisition handbook]<br />Configuration Management<br />Configuration Management is the process of creating and maintaining an up-to-date record of all components of the infrastructure.<br />Functions associated with Configuration Management are:<br />Planning <br />Identification <br />Control <br />Status Accounting <br />Verification and Audit <br />Configuration Management Database (CMDB) - A database that contains details about the attributes and history of each Configuration Item and details of the important relationships between CIs. The information held may be in a variety of formats: textual, diagrammatic, photographic, etc.; effectively a data map of the physical reality of the IT Infrastructure.<br />Configuration Item - Any component of an IT Infrastructure which is (or is to be) under the control of Configuration Management.<br />The lowest-level CI is normally the smallest unit that will be changed independently of other components. CIs may vary widely in complexity, size and type, from an entire service (including all its hardware, software, documentation, etc.) to a single program module or a minor hardware component. <br />Data Management<br />Backup and Recovery<br />Concepts<br />Recovery Time Objective, or RTO, is the duration of time within which a set of data, a server, a business process, etc. must be restored. For example, a highly visible server such as a campus' main web server may need to be up and running again in a matter of seconds, as the business impact if that service is down is high. Conversely, a server with low visibility, such as a server used in software QA, may have an RTO of a few hours.<br />Recovery Point Objective, or RPO, is the acceptable amount of data loss a business can tolerate, measured in time. 
In other words, this is the point in time, prior to a data-loss event, to which data can be successfully recovered. For less critical systems, it may be acceptable to recover to the most recent backup taken at the end of the business day, whereas highly critical systems may have an RPO of an hour or only a few minutes. RPO and RTO go hand-in-hand in developing your data protection plan.<br />Deduplication:<br />Source deduplication - Source deduplication means that the deduplication work is done up-front by the client being backed up.<br />Target deduplication - Target deduplication is where the deduplication processing is done by the backup appliance and/or server. There tend to be two forms of target deduplication: in-line and post-process.<br />In-line deduplication devices decide whether or not they have seen the data before writing it out to disk.<br />Post-process deduplication devices write all of the data to disk, and then at some later point, analyze that data to find duplicate blocks.<br />Backup types<br />Full backups - Full backups are a backup of a device that includes all data required to restore that device to the point in time at which the backup was performed.<br />Incremental backups - Incremental backups back up the data that has changed since a previous backup. There do not seem to be any industry standards when you compare one vendor's style of incremental to another. In fact, some vendors include multiple styles of incrementals that a backup administrator may choose from.<br />A cumulative incremental backup is a style of incremental backup where the data set contains all data changed since the last full backup.<br />A differential incremental backup is a style of incremental backup where the data set contains all data changed since the previous backup, whether it be a full or another differential incremental.<br />Tape Media - There are many tape formats to choose from when looking at tape backup purchases. 
They range from open standards (many vendors sell compatible drives) to single-vendor or legacy technologies.<br />DLT - Digital Linear Tape, or DLT, was originally developed by Digital Equipment Corporation in 1984. The technology was later purchased by Quantum in 1994. Quantum licenses the technology to other manufacturers, as well as manufacturing its own drives.<br />LTO - Linear Tape Open, or LTO, is a tape technology developed by a consortium of companies in order to compete with proprietary tape formats in use at the time.<br />DAT/DDS - Digital Data Storage, or DDS, is a tape technology that evolved from Digital Audio Tape, or DAT, technology.<br />AIT - Advanced Intelligent Tape, or AIT, is a tape technology developed by Sony in the late 1990s.<br />STK/IBM - StorageTek and IBM have created several proprietary tape formats that are usually found in large, mainframe environments.<br />Methods<br />Disk-to-Tape (D2T) - Disk-to-tape is what most system administrators think of when they think of backups, as it has been the most common backup method in the past. The data typically moves from the client machine through some backup server to an attached tape drive. Writing data to tape is typically faster than reading the data from the tape.<br />Disk-to-Disk (D2D) - With the dramatic drop in hard drive prices over recent years, disk-to-disk methods and technologies have become more popular. The big advantage they have over the traditional tape method is speed in both the writing and reading of data. Some options available in the disk-to-disk technology space:<br />VTL - Virtual Tape Libraries, or VTLs, are a class of disk-to-disk backup devices where a disk array and software appear as a tape library to your backup software.<br />Standard disk array - Many enterprise backup software packages available today support writing data to attached disk devices instead of a tape drive. 
One advantage to this method is that you don't have to purchase a special device in order to gain the speed benefits of disk-to-disk technology.<br />Disk-to-Disk-to-Tape (D2D2T) - Disk-to-disk-to-tape is a combination of the previous two methods. This practice combines the best of both worlds: speed benefits from using disk as your backup target, and tape's value in long-term and off-site storage practices. Many specialized D2D appliances have some support for pushing their images off to tape. Backup applications that support disk targets also tend to support migrating their images to tape at a later date.<br />Snapshots - A snapshot is a copy of a set of files and directories as they were at a particular moment in time. On a server operating system, the snapshot is usually taken by either the logical volume manager (LVM) or the file system driver. File system snapshots tend to be more space-efficient than their LVM counterparts. Most storage arrays come with some sort of snapshot capability, either as a base feature or as a licensable add-on.<br />VM images – In a virtualized environment, backup agents may be installed on the virtual host and file-level backups invoked in the conventional manner. Backing up each virtual instance as a file at the hypervisor level is another consideration. A prime consideration in architecting backup strategies in a virtual environment is the use of a proxy server or intermediate staging server to handle snapshots of active systems. Such proxies allow for the virtual host instance to be staged for backup without having to quiesce or reboot the VM. Depending on the platform and the OS, it may also be possible to achieve file-level restores within the VM while backing up the entire VM as a file.<br />Replication<br />On-site - On-site replication is useful if you are trying to protect against device failure. You would typically purchase identical storage arrays and then configure them to mirror the data between them. 
This does not, however, protect against some sort of disaster that takes out your entire data center.<br /> Off-site - Off-site implies that you are replicating your data to a similar device located away from your campus. Technically, off-site could mean something as simple as a different building on your campus, but generally this term implies some geo-diversity to the configuration.<br /> Synchronous vs. Asynchronous - Synchronous replication guarantees zero data loss by performing atomic writes. In other words, the data is written to all of the arrays that are part of the replication configuration, or to none of them. A write request is not considered complete until acknowledged by all storage arrays. Depending on your application and the distance between your local and remote arrays, synchronous replication can cause performance impacts, since the application may wait until it has been informed by the OS that the write is complete. Asynchronous replication gets around this by acknowledging the write as soon as the local storage array has written the data. Asynchronous replication may increase performance, but it can contribute to data loss if the local array fails before the remote array has received all data updates.<br />In-band vs. Out-of-band - In-band replication refers to replication capabilities built into the storage device. Out-of-band replication can be accomplished with an appliance, with software installed on a server, or "in the network", usually in the form of a module or licensed feature installed into a storage router or switch.<br />Tape Rotation and Aging Strategies<br />Grandfather, father, son - From Wikipedia: "
Grandfather-father-son backup refers to the most common rotation scheme for rotating backup media. Originally designed for tape backup, it works well for any hierarchical backup strategy. The basic method is to define three sets of backups, such as daily, weekly and monthly. The daily, or son, backups are rotated on a daily basis with one graduating to father status each week. The weekly or father backups are rotated on a weekly basis with one graduating to grandfather status each month."
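The rotation described above can be sketched as a small scheduling function. This is an illustrative sketch, not tied to any backup product; it assumes the weekly "father" full backup runs on Fridays and that the last Friday of each month graduates to "grandfather" status:

```python
from datetime import date, timedelta

def gfs_set(day: date, weekly_day: int = 4) -> str:
    """Classify a backup date under a grandfather-father-son rotation.

    Assumptions (illustrative only): the weekly 'father' backup runs on
    Friday (weekday 4), and the last Friday of the month graduates to
    'grandfather' status. All other days produce daily 'son' backups.
    """
    if day.weekday() == weekly_day:
        # If the same weekday seven days later falls in the next month,
        # this is the last weekly backup of the month -> grandfather.
        if (day + timedelta(days=7)).month != day.month:
            return "grandfather"
        return "father"
    return "son"
```

For example, in May 2024 this sketch classifies May 22 (a Wednesday) as a son backup, May 24 as a father backup, and May 31 (the last Friday of the month) as a grandfather backup.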
<br />Offsite vaults - Vaulting, or moving media from on-site to an off-site storage facility, is usually done with some sort of full backup. The media sent off-site can be either the original copy or a duplicate, but it is common to send at least one copy of the media rather than your only copy. The amount of time it takes to retrieve a given piece of media should be taken into consideration when calculating and planning for your RTO.<br />Retention policies: The CSU maintains a website with links and resources to help campuses comply with requirements contained in Executive Order 1031, the CSU Records/Information Retention and Disposition Schedules. The objective of the executive order is to ensure compliance with legal and regulatory requirements while implementing appropriate operational best practices. The site is located at http://www.calstate.edu/recordsretention.<br />Tape library: A tape library is a device that usually holds multiple tapes and multiple tape drives and has a robot to move tapes between the various slots and drives. A library can help automate the process of switching tapes so that an administrator doesn't have to spend several hours every week changing out tapes in the backup system. A large tape library can also allow you to consolidate the various media formats in use in an environment into a single device (i.e., mixing DLT and LTO tapes and drives).<br />Disk backup appliances/arrays: Some vendor backup solutions may implement the use of a dedicated storage appliance or array that is optimized for their particular backup scheme. 
In the case of incorporating deduplication into the backup platform, a dedicated appliance may be involved for handling the indexing of the bit-level data.<br />Archiving<br />Media Lifecycle<br />Destruction of expired data<br />Hierarchical Storage Management<br />Document Management<br />Asset Management<br />Effective data center asset management is necessary for both regulatory and contractual compliance. It can improve life cycle management and facilitate inventory reductions by identifying under-utilized hardware and software, potentially resulting in significant cost savings. An effective management process requires combining current Information Technology Infrastructure Library (ITIL) and Information Technology Asset Management (ITAM) best practices with accurate asset information, ongoing governance and asset management tools. The best systems/tools should be capable of asset discovery, should manage all aspects of the assets (physical, financial, contractual and life cycle), and should provide Web interfaces for real-time access to the data. Recognizing that sophisticated systems may be prohibitively expensive, asset management for smaller environments may be handled with spreadsheets or a simple database. Optimally, a system that could be shared among campuses while maintaining restricted permission levels would allow for more comprehensive and uniform participation, such as the Network Infrastructure Asset Management System (NIAMS), http://www.calstate.edu/tis/cass/niams.shtml.<br />The following are asset categories to be considered in a management system:<br />Physical Assets – to include the grid, floor space, tile space, racks and cables. The layout of the space and the utilization of the attributes above are themselves assets that need to be tracked both logically and physically.<br />Network Assets – to include routers, switches, firewalls, load balancers, and other network-related appliances. 
<br />Storage Assets – to include Storage Area Networks (SAN), Network Attached Storage (NAS), tape libraries and virtual tape libraries.<br />Server Assets – to include individual servers, blade servers and enclosures.<br />Electrical Assets – to include Uninterruptible Power Supplies (UPS), Power Distribution Units (PDU), breakers, outlets (NEMA noted), circuit numbers and the grid locations of same. Power consumption is another example of a logical asset that needs to be monitored by the data center manager in order to maximize server utilization and understand, if not reduce, associated costs.<br />Air Conditioning Assets – to include air conditioning units, air handlers, chiller plants and other airflow-related equipment. Airflow in this instance may be considered a logical asset as well, but its usage plays an important role in a data center environment. Rising energy costs and concerns about global warming require data center managers to track usage carefully. Computational fluid dynamics (CFD) modeling can serve as a tool for maximizing airflow within the data center.<br />Data Center Security and Safety Assets – media access controllers, cameras, fire alarms, environmental surveillance, access control systems and access cards/devices, and fire and life safety components such as fire suppression systems.<br />Logical Assets – T1s, PRIs and other communication lines, air conditioning, and electrical power usage. Most important in this logical realm is the management of the virtual environment. Following is a list of logical assets or associated attributes that would need to be tracked:<br />A list of Virtual Machines <br />Software licenses in use in the data center<br />Virtual access to assets<br />VPN access accounts to the data center<br />Server/asset accounts local to the asset<br />Information Assets – to include text, images, audio, video and other media. Information is probably the most important asset a data center manager is responsible for. 
An information asset is a definable piece of information, stored in any manner, that is recognized as valuable to the organization. Users must have accurate, timely, secure and personalized access to this information.<br />The following are asset groupings to be considered in a management system:<br />By Security Level<br />Confidentiality<br />FERPA<br />HIPAA<br />PCI<br />By Support Organization<br />Departmental<br />Computer Center Supported<br />Project Team<br />Criticality<br />Critical (e.g., 24x7 availability)<br />Business Hours only (e.g., 8 AM - 7 PM)<br />Noncritical<br />By Funding Source (useful for recurring costs)<br />Departmental funded<br />Project funded<br />Division funded<br />Tagging/Tracking<br />Licensing<br />Software Distribution<br />Problem Management<br />Problem Management investigates the underlying cause of incidents and aims to prevent incidents of a similar nature from recurring. By removing errors, which often requires a structural change to the IT infrastructure in an organization, the number of incidents can be reduced over time. Problem Management should not be confused with Incident Management: Problem Management seeks to remove the causes of incidents permanently from the IT infrastructure, whereas Incident Management deals with fighting the symptoms of incidents. Problem Management is proactive while Incident Management is reactive.<br />Fault Detection - A condition often identified as a result of multiple incidents that exhibit common symptoms. Problems can also be identified from a single significant incident, indicative of a single error, for which the cause is unknown, but for which the impact is significant.<br />Correction - An iterative process to diagnose known errors until they are eliminated by the successful implementation of a change under the control of the Change Management process.<br />Reporting - Summarizes Problem Management activities. 
Includes number of repeat incidents, problems, open problems, repeat problems, etc.<br />Security<br />Data Security<br />Data security is the protection of data from accidental or malicious modification, destruction, or disclosure. Although the subject of data security is broad and multi-faceted, it should be an overriding concern in the design and operation of a data center. There are multiple laws, regulations and standards that are likely to be applicable, such as the Payment Card Industry Data Security Standard, the ISO 17799 Information Security Standard, California SB 1386, California AB 211, and the California State University Information Security Policy and Standards, to name a few. Compliance with these standards and laws must be proven periodically.<br />Encryption<br />Encryption is the use of an algorithm to encode data in order to render a message or other file readable only to the intended recipient. Its primary functions are to ensure non-repudiation, integrity, and confidentiality in both data transmission and data storage. The use of encryption is especially important for Protected Data (data classified as Level 1 or 2). Common transmission encryption protocols and utilities include SSL/TLS, Secure Shell (SSH), and IPsec. Encrypted data storage programs include PGP's encryption products (other security vendors, such as McAfee, have products in this space as well), encrypted USB keys, and TrueCrypt's free encryption software. Key management (exchange of keys, protection of keys, and key recovery) should be carefully considered.<br />Authentication<br />Authentication is the verification of the identity of a user. From a security perspective it is important that user identification be unique so that each person can be positively identified. The process of issuing identifiers must also be secure and documented. 
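One mechanically checkable rule in this area is password strength. The policy described later in this section (a minimum of eight characters containing three of the four character types) can be expressed as a short check. This is an illustrative sketch of that one rule, not a complete password policy:

```python
import string

def is_strong_password(pw: str) -> bool:
    """Check the rule given in this section: at least eight characters,
    containing three of the four character types (lower case alpha,
    upper case alpha, number, special character)."""
    if len(pw) < 8:
        return False
    classes = [
        any(c.islower() for c in pw),       # lower case alpha
        any(c.isupper() for c in pw),       # upper case alpha
        any(c.isdigit() for c in pw),       # number
        any(c in string.punctuation for c in pw),  # special character
    ]
    return sum(classes) >= 3
```

For example, "Pass123!" satisfies the rule, while "password" (one character class) and any seven-character string do not.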
There are three types of authentication available:<br />What a person knows (e.g., password or passphrase)<br />What a person has (e.g., smart card or token)<br />What a person is or does (e.g., biometrics or keystroke dynamics)<br />Single-factor authentication uses one of the above authentication types, two-factor authentication uses two of them, and three-factor authentication uses all of them. <br />Single-factor password authentication remains the most common means of authentication ("what a person knows"). However, due to the computing power of modern computers in the hands of attackers and technologies such as "rainbow tables", passwords used for single-factor authentication may soon outlive their usefulness. Strong passwords should be used, and a password should never be transmitted or stored without being encrypted. A reasonably strong password would be a minimum of eight characters and should contain three of the following four character types: lower case alpha, upper case alpha, number, and special character.<br />Vulnerability Management<br />Anti-malware Protection<br />Malware (malicious code, such as viruses, worms, and spyware, written to circumvent the security policy of a computer) represents a threat to data center operations. Anti-malware solutions must be deployed on all operating system platforms to detect and reduce the risk to an acceptable level. Solutions for malware infection attacks include firewalls (host and network), antivirus/anti-spyware software, host/network intrusion protection systems, and OS/application hardening and patching. Relying only on antivirus solutions will not fully protect a computer from malware. Determining the correct mix and configuration of anti-malware solutions depends on the value and type of services provided by a server. Antivirus software, firewalls, and intrusion protection systems need to be regularly updated in order to respond to current threats.<br />Patching<br />The ongoing patching of operating systems and applications is an important activity in vulnerability management. Patching includes file updates and configuration alterations. Data Center Operations groups should implement a patching program designed to monitor available patches and to categorize, test, implement, and monitor the deployment of OS and application patches. In order to detect and address emerging vulnerabilities in a timely manner, campus staff members should frequently monitor announcements from sources such as BugTraq, REN-ISAC, US-CERT, and Microsoft and then take appropriate action. 
Both timely patch deployment and patch testing are important and should be thoughtfully balanced. Patches should be applied via a change control process. The ability to undo patches is highly desirable in case unexpected consequences are encountered. The capability to verify that patches were successfully applied is also important.<br />Vulnerability Scanning<br />The data center should implement a vulnerability scanning program, such as regular use of McAfee's Foundstone.<br />Compliance Reporting<br />Compliance Reporting informs all parties with responsibility for the data and applications how well risks are reduced to an acceptable level as defined by policy, standards, and procedures. Compliance reporting is also valuable in proving compliance with applicable laws and contracts (HIPAA, PCI DSS, etc.). Compliance reporting should include measures on:<br />How many systems are out of compliance.<br />Percentage of compliant/non-compliant systems.<br />How quickly a system comes into compliance once detected out of compliance.<br />Compliance trends over time.<br />Physical Security<br />When planning for security around your Data Center and the equipment contained therein, physical security must be part of the equation, as part of a "Defense-in-Depth" security model. If physical security of critical IT equipment isn't addressed, it doesn't matter how long your passwords are or what method of encryption you are using on your network: once an attacker has gained physical access to your systems, not much else matters.<br />See section 2.4.1.6 for a description of access control<br /><insert diagram of reference model with key components as building blocks><br />Best Practice Components<br />Standards<br />ITIL<br />The Information Technology Infrastructure Library (ITIL) Version 3 is a collection of good practices for the management of Information Technology organizations. It consists of five components whose central theme is the management of IT services: Service Strategy (SS), Service Design (SD), Service Transition (ST), Service Operation (SO), and Continual Service Improvement (CSI). Together these five components define the ITIL life cycle, with the first four (SS, SD, ST and SO) at the core and CSI overarching them. CSI wraps the first four components, reflecting the need for each core component to continuously look for ways to improve its respective ITIL process. 
<br />ITIL defines the five components in terms of functions/activities, concepts, and processes, as summarized below:<br />Service Strategy<br />Main Activities: Define the Market; Develop Offerings; Develop Strategic Assets; Prepare Execution.<br />Key Concepts: Utility & Warranty; Value Creation; Service Provider; Service Model.<br />Processes: Service Portfolio Management; Demand Management; Financial Management; Service Portfolio.<br />Service Design<br />Five Aspects of SD: Service Solution; Service Management Systems and Solutions; Technology and Management Architectures & Tools; Processes; Measurement Systems, Methods & Metrics.<br />Key Concepts: the Four "P's" (People, Processes, Products & Partners); Service Design Package; Delivery Model Options; Service Level Agreement; Operational Level Agreement; Underpinning Contract.<br />Processes: Service Catalog Management; Service Level Management; Availability Management; Capacity Management; IT Service Continuity Management; Information Security Management; Supplier Management.<br />Service Transition<br />Processes: Change Management; Service Asset & Configuration Management; Release & Deployment Management; Knowledge Management; Transition Planning & Support; Service Validation & Testing; Evaluation.<br />Key Concepts: Service Changes; Request for Change; the Seven "R's" of Change Management; Change Types; Release Unit; Configuration Management Database (CMDB); Configuration Management System; Definitive Media Library (DML).<br />Service Operation<br />Achieving the Right Balance: Internal IT View versus External Business View; Stability versus Responsiveness; Reactive versus Proactive; Quality versus Cost.<br />Processes: Event Management; Incident Management; Problem Management; Access Management; Request Fulfillment.<br />Functions: Service Desk; Technical Management; IT Operations Management; Application Management.<br />Continual Service Improvement<br />The 7-Step Improvement Process, driven by the vision and strategy and by tactical and operational goals:<br />1. Define what you should measure.<br />2. Define what you can measure.<br />3. Gather the data. Who? How? When? Integrity of the data?<br />4. Process the data. Frequency, format, system, accuracy.<br />5. Analyze the data. Relationships, trends, according to plan, targets met, corrective actions?<br />6. Present and use the information: assessment summary, action plans, etc.<br />7. Implement corrective action.<br />ASHRAE<br />ASHRAE modified its operational envelope for data centers with the goal of reducing energy consumption. For extended periods of time, the IT manufacturers recommend that data center operators maintain their environment within the recommended envelope. Exceeding the recommended limits for short periods of time should not be a problem, but running near the allowable limits for months could result in increased reliability issues. Based on a review of the available data from a number of IT manufacturers, the 2008 expanded recommended operating envelope is acceptable to all of the IT manufacturers, and operation within this envelope will not compromise the overall reliability of the IT equipment.<br />Following are the previous (2004) and 2008 recommended envelope data:<br />Low End Temperature – 2004: 20°C (68°F); 2008: 18°C (64.4°F).<br />High End Temperature – 2004: 25°C (77°F); 2008: 27°C (80.6°F).<br />Low End Moisture – 2004: 40% RH; 2008: 5.5°C DP (41.9°F DP).<br />High End Moisture – 2004: 55% RH; 2008: 60% RH & 15°C DP (59°F DP).<br /><Additional comments on the relationship of electro static discharge (ESD) and relative humidity and the impact to printed circuit board (PCB) electronics and component lubricants in drive motors for disk and tape.><br />Uptime Institute<br />Hardware Platforms<br />Servers<br />Server Virtualization<br />Practices<br />Production hardware should run the latest stable release of the selected hypervisor, with patching and upgrade paths defined and pursued on a scheduled basis, and with each hardware element (e.g. 
blade) dual-attached to the data network and storage environment to provide for load balancing and fault tolerance.<br />Virtual machine templates should be developed, tested and maintained to allow for consistent OS, maintenance and middleware levels across production instances. These templates should be used to support cloning of new instances as required and systematic maintenance of production instances as needed. <br />Virtual machines should be provisioned using a defined work order process that allows for an effective understanding of server requirements and billing/accounting expectations.<br />This process should allow for interaction between requestor and provider to ensure appropriate configuration and acceptance of any fee-for-service arrangements.<br />Virtual machines should be monitored for CPU, memory, network and disk usage. Configurations should be modified, with service-owning unit participation, to ensure an optimum balance between required and committed capacity.<br />Post-provisioning capacity analysis should be performed frequently, via a formal, documented process. For example, a 4-VCPU virtual machine with 8 gigabytes of RAM that is using less than 10% of 1 VCPU and 500 megabytes of RAM should be adjusted to ensure that resources are not wasted.<br />Virtual machine boot/system disks should be provisioned into a LUN maintained in the storage environment to ensure portability of server instances across hardware elements. <br />To reduce I/O contention, virtual machines with high performance or high capacity requirements should have their non-boot/system disks provisioned using dedica