5. Efficient Data Management is Mission Critical
Operating systems may change and evolve
Application vendors may come and go
Servers are being virtualized and democratized
Data is the persistent, immutable, and ever-present corporate asset
Data is the Vital Center
8. Efficient Data Management is Mission Critical
Average large enterprise: 3.5 petabytes
Average medium enterprise: 350 terabytes
Virtual Server – Physical Bottleneck
9. Backup is Broken
Production system impact
Shrinking backup window
Complex tape management processes
High costs of replication and/or tape shipment
Storage vendor lock-in
RTO from 2 to 24 hours
RPO 18+ hours
Cost, cost, cost!
CPU load impact
11. Customer Challenges
Poor data lifecycle management
Exploding storage growth
Rethinking data protection
12. Efficient Data Management is Mission Critical
Transforming the once-a-day process into a continuous process
Virtual Server – Backup Transformation
20. Application-centric Protection
Why is application integration important? Because when you recover, you want your data to look like this, and not like this.
21. Giving Names to Eggs
Crash-Consistent Data
Transactionally Complete Data
25. Snapshot Sequence of Events
Snapshot Director puts the application into hot backup mode
Application holds up processing and flushes its cache
Application agent informs the file system agent
Agent ensures the file system cache is flushed to disk
Snapshot Director initiates a VMware snapshot
Transactionally complete snapshot taken across the virtual LUN
All systems released
[Diagram: Snapshot Director virtual appliance in the VMware Service Console on the ESX server; each virtual machine runs an application snapshot agent and a file system agent; FalconStor CDP provides the storage layer]
35. Virtual disk assignments are automatically coordinated with the associated virtual machines with the help of the FalconStor SRA.
36. Disaster recovery scenarios can be tested at any time
A TimeView™ snapshot image is created and mounted to the virtual machine being tested
The virtual machine is started
Recovery is validated and the TimeView image is removed
37. Key Components of Site Recovery Manager Integration with FalconStor DR Solutions
FalconStor Storage Replication Adapter coordinates with Site Recovery Manager and installs in VMware Virtual Center
Application and file system agents run inside each virtual machine, alongside the Snapshot Director in the ESX Service Console
FalconStor CDP provides replication services coordinated with Site Recovery Manager
[Diagram: Virtual Center running Site Recovery Manager and the Storage Replication Adapter; ESX server with Snapshot Director, per-VM application snapshot agents and file system agents; FalconStor CDP storage]
41. Heterogeneous Replication – Any-to-Any Disk
Heterogeneous replication with auto-failback support, from any storage vendor to any other
[Diagram: Primary and remote sites, each with VMware vCenter Server running Site Recovery Manager and VMware ESX servers; Vendor A disk replicating to Vendor B disk, with management data exchanged between sites]
53. FalconStor CDP scans changes at a 512-byte level and replicates only the changed data
Avoids constant re-sending of duplicate data
[Diagram: 512-byte disk sectors; shaded = data written by application or file system; highlighted = sectors with actual data change. A typical 8 KB write: many systems send all 8 KB or more, while FalconStor CDP sends only the truly changed sectors.]
An important component of the backup process is the backup agent that resides on the application host and sends data over the LAN to the backup server. This method of data copy has multiple drawbacks, including performance impact on production systems and slow backups due to bandwidth constraints, which we will discuss later in this presentation.

Mainly, though, the increasing amount of digital data and the global nature of our economies put additional pressure on the ever-shrinking backup window: the interval when the load on production servers is minimal and the backup process can take place without affecting production environments or ongoing business processes. This typically falls after business hours.

The daily tape backup process also creates a lot of complexity in managing multiple tapes and their retention schedules and rotation. This process is error prone and strains resources and time.

Another challenge is that it is hard to verify the validity of a backup job without remounting the tape to check the integrity of the data on it. Because of the resource requirements, this verification tends to be performed only rarely.

The last challenge is the recovery process: the extended time of the recovery itself, as well as the recovery point objective attached to tape backups, or backups in general. It is very common to see recovery times extending beyond 24 hours, and recovery points no better than the last day's backup.
What else can we do to protect data? There are a number of things we can do at the storage level to increase data protection.
This diagram maps out the components of the solution so far; next we'll look at how it works. At the top of the diagram, the Application Snapshot Director installs into the VMware Service Console on the ESX server. Under that are the individual virtual machines, each containing two agents. The first, if needed, is an application-specific snapshot agent for applications such as Exchange, SQL Server, Oracle, or DB2; it makes sure the application is put into backup mode and transactions are written to disk. The second is a file system agent for the Windows or Linux file system, which makes sure the file system level is also in backup mode and transactions are flushed out of cache. Everything needs to be stable on disk before the snapshot is made. At the bottom is FalconStor CDP, which provides the online backup repository for ESX.
This slide outlines the steps involved in taking the snapshot. The snapshot request is initiated from CDP; this is normally a scheduled process, but it can be activated manually as well. The Application Snapshot Director receives the request and informs the application snapshot agent running within the virtualized application to put the application into backup mode. Once the application is in backup mode, the file system is notified to flush out any data still in cache or in transit. When the VM is fully in backup mode, the next request goes to the ESX server, which uses VMware snapshot technology to place the full ESX server into backup mode. At this point the entire system is static, with all transactions written to the backup disk. The fourth step is to take the snapshot at the storage level. This entire process takes only a few seconds; it takes longer to explain than it does to happen.

After the snapshot is done, all systems are released and return to normal mode. A key benefit of this method is that the VMware snapshot is held only for a few seconds, rather than for the length of the backup process. Recall what we said earlier about the impact of VMware snapshots: the FalconStor method greatly reduces that impact.
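The quiesce-then-release ordering described above can be sketched as nested scopes. This is a minimal, illustrative sketch only: the `Agent` class and event log are hypothetical stand-ins for the FalconStor agents and VMware APIs, not real product interfaces.

```python
from contextlib import contextmanager

events = []

class Agent:
    """Hypothetical stand-in for an application, file system, or ESX agent."""
    def __init__(self, name):
        self.name = name

    @contextmanager
    def backup_mode(self):
        events.append(f"{self.name}: quiesce")   # flush caches, hold writes
        try:
            yield
        finally:
            events.append(f"{self.name}: release")

def take_consistent_snapshot(app, fs, esx):
    # Nesting guarantees release happens in reverse order of quiesce,
    # and the storage snapshot fires only while everything is static.
    with app.backup_mode():          # 1. application into hot backup mode
        with fs.backup_mode():       # 2. file system cache flushed to disk
            with esx.backup_mode():  # 3. brief VMware snapshot
                events.append("storage: snapshot")  # 4. snapshot across the LUN

take_consistent_snapshot(Agent("app"), Agent("fs"), Agent("esx"))
print(events)
```

Note how the VMware-level quiesce is the innermost and therefore shortest-lived scope, mirroring the point that the VMware snapshot is held only for a few seconds.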
FalconStor provides a Storage Replication Adapter for integrating with VMware Site Recovery Manager. This is not the place for a full discussion of SRM, but we highly recommend taking a look at it, as it is a very good tool for managing the replication process. SRM is a management tool; it does not move data. FalconStor CDP provides the actual data movement, coordinated with SRM. A very good feature of SRM is the ability to run disaster recovery scenarios: you can test your DR setup to make sure that applications will start as expected, all without breaking the replication process.
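The DR-test flow just described (create a TimeView image, mount and boot a test VM, validate, then discard the image) can be sketched as follows. The `FakeCDP` and `FakeVM` classes are hypothetical stand-ins for illustration; SRM and FalconStor expose their own interfaces.

```python
class FakeCDP:
    """Hypothetical stand-in for the CDP appliance's TimeView operations."""
    def __init__(self):
        self.timeviews = set()
        self.replicating = True          # replication is never interrupted

    def create_timeview(self):
        tv = f"tv-{len(self.timeviews)}"
        self.timeviews.add(tv)
        return tv

    def delete_timeview(self, tv):
        self.timeviews.discard(tv)

class FakeVM:
    """Hypothetical stand-in for the test virtual machine."""
    def __init__(self):
        self.powered = False
        self.disk = None

    def attach(self, image): self.disk = image
    def detach(self, image): self.disk = None
    def power_on(self): self.powered = True
    def power_off(self): self.powered = False
    def applications_healthy(self):
        return self.powered and self.disk is not None

def run_dr_test(cdp, vm):
    tv = cdp.create_timeview()           # point-in-time snapshot image
    try:
        vm.attach(tv)
        vm.power_on()
        return vm.applications_healthy() # recovery validation
    finally:                             # clean up even if validation fails
        vm.power_off()
        vm.detach(tv)
        cdp.delete_timeview(tv)

cdp, vm = FakeCDP(), FakeVM()
ok = run_dr_test(cdp, vm)
print(ok, cdp.replicating, cdp.timeviews)
```

The key property the sketch captures is that the test reads from a disposable snapshot image, so ongoing replication is untouched whether or not validation succeeds.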
This diagram combines all the pieces we have seen. The top portion is the only new piece. Here we see how the FalconStor SRA installs along with SRM in the VMware Virtual Center (a separate system).
FalconStor NSS uses patented MicroScan replication, the most efficient replication model on the market because it replicates only the sectors that actually change. When applications or file systems write data, they often write far more than actually changes. For example, a file system may write a minimum of 8 KB no matter how little data changed; in other words, the same content is often rewritten to the same storage sectors. Many replication tools copy all of the written data. In this illustration, the colored blocks show a typical 8 KB write by a file system, but within that 8 KB only a few sectors may actually change, as shown by the blue squares. NSS scans the disk at the 512-byte level to identify the sectors with NEW data and copies ONLY those sectors over the network. This can dramatically reduce the amount of data being replicated.
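The sector-level comparison can be sketched in a few lines. This is a simplified illustration of the technique, assuming hypothetical in-memory buffers standing in for disk contents; the real product operates inside the storage layer.

```python
SECTOR = 512  # MicroScan granularity: one disk sector

def changed_sectors(old: bytes, new: bytes, sector: int = SECTOR):
    """Compare two equal-length buffers sector by sector and return
    (offset, data) pairs for sectors whose content actually differs."""
    assert len(old) == len(new)
    changes = []
    for off in range(0, len(new), sector):
        if new[off:off + sector] != old[off:off + sector]:
            changes.append((off, new[off:off + sector]))
    return changes

# An application rewrites an 8 KB block, but only two sectors truly change.
old = bytearray(8192)
new = bytearray(old)
new[0:4] = b"data"          # change inside sector 0
new[4096:4100] = b"more"    # change inside sector 8

deltas = changed_sectors(bytes(old), bytes(new))
sent = sum(len(data) for _, data in deltas)
print(f"replicating {sent} of {len(new)} bytes")  # 1024 of 8192 bytes
```

Here a naive replicator would send all 8,192 bytes, while the sector-level diff sends just the two 512-byte sectors that changed.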
This slide shows some real-world data from an Exchange environment: 1,300 Exchange users across four storage groups, with data replicated hourly from California to Pennsylvania. MicroScan is particularly effective with Exchange because of the random-write nature of Exchange data. In the second column of the chart, we see the amount of data change accumulated by Exchange, as measured at the application level. The next column shows those same sectors after being filtered by MicroScan. On average, MicroScan replicated only 2 to 3% of the total data change recorded, because that represented the actual NEW data; overall, WAN traffic was reduced by 97%. Not all applications will see quite this level of WAN reduction, since it depends on the nature of the application and how it writes data, but typically we see reductions on the order of 80–90% or more.
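The arithmetic behind the reduction figure is straightforward. The MB values below are hypothetical, chosen only to illustrate how a 2–3% send rate corresponds to a ~97% WAN reduction; they are not the measured Exchange numbers from the chart.

```python
# Hypothetical hourly figures, in MB, for one storage group.
app_level_change_mb = 4000   # changes accumulated at the application level
microscan_sent_mb = 100      # truly changed 512-byte sectors replicated

pct_sent = 100 * microscan_sent_mb / app_level_change_mb
wan_reduction = 100 - pct_sent
print(f"sent {pct_sent:.1f}% of changed data; WAN reduction {wan_reduction:.1f}%")
# sent 2.5% of changed data; WAN reduction 97.5%
```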