I'd like to talk a little about expectations, because everyone does something for a reason and expects certain business benefits to come out of their actions. You may have had to provide your management with a cost/benefit analysis to get approval for spending your time and purchasing the hypervisor software licenses to use in production.
The first thing people often look for is a reduction in operating expenses: less money spent on physical hardware acquisition and maintenance, because several physical servers were consolidated onto one physical server running several virtual servers. This also lowers your monthly electric bill in two ways. First, there is a power savings from unplugging several physical servers. Second, there is a savings in air conditioning costs, since you now have fewer servers generating heat in your computer room. You may also see a reduction in data center costs if you pay for rack space, as you will need less of it going forward.
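The savings above can be put into rough numbers. A minimal sketch, where the wattage, electricity rate, and cooling overhead are all illustrative assumptions rather than measurements; substitute your own figures:

```python
# Rough consolidation savings estimate. All inputs are illustrative
# assumptions, not measurements; plug in your own measured values.

def annual_power_savings(servers_removed, watts_per_server=350,
                         dollars_per_kwh=0.12, cooling_factor=0.5):
    """Estimate yearly electricity savings from decommissioning servers.

    cooling_factor: extra cooling energy per unit of server energy
    (0.5 means every watt of IT load costs another half watt of A/C).
    """
    hours_per_year = 24 * 365
    server_kwh = servers_removed * watts_per_server * hours_per_year / 1000
    total_kwh = server_kwh * (1 + cooling_factor)  # servers + A/C
    return total_kwh * dollars_per_kwh

# Consolidating 10 physical servers onto 1 removes 9 machines:
print(round(annual_power_savings(9)))  # -> 4967 (dollars per year)
```

The point of the `cooling_factor` term is the second savings mentioned above: every server watt you eliminate also eliminates the air conditioning energy that was removing its heat.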
Some people may look for savings in administration effort, arguing that virtual servers are easier to manage than physical servers. This is not necessarily the case, as virtual servers require new ways of managing and protecting. Your administrators will all have to be trained to handle issues with virtual servers, because you can't always be available to deal with every problem that is unique to them. You will also need the right tools in place to track and manage your virtual and physical servers more efficiently. We will talk more about this later on.
Now let's look at a few industry best practices to keep your newly virtualized environment running smoothly. The first and foremost best practice is to make sure that your virtual servers are backed up and that your data is protected and recoverable. Although this may sound basic, some people think that because hypervisors provide high availability for their virtual servers, they don't need to perform backups. Most backup systems will allow you to put a backup client or agent on the virtual server and back it up as if it were any other physical server. The drawback of this approach is that several virtual server backups could run at the same time, putting an excessive load on the physical server hosting those virtual servers. A much more elegant approach, and the one recommended by hypervisor vendors, is to use a backup system that uses the backup API provided by the hypervisor vendor. These APIs manage the backup process and make sure that the supporting hardware resources are not overtaxed during a backup. They also provide additional functionality that enables backup vendors to move data rapidly for backup and restore and to eliminate the need for temporary storage of backup data.
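The overload problem with in-guest agents, and the usual mitigation of capping how many backups run at once per host, can be sketched in a few lines. This is a generic illustration of the technique, not any vendor's scheduler; the limit of 2 is an arbitrary assumption:

```python
# Sketch: capping concurrent in-guest backups so a host is never
# overtaxed. Generic technique, not any product's actual scheduler.
import threading
import time

HOST_CONCURRENCY_LIMIT = 2  # assumed safe number of parallel backups
slots = threading.BoundedSemaphore(HOST_CONCURRENCY_LIMIT)
peak = 0       # highest number of backups observed running at once
running = 0
lock = threading.Lock()

def backup_guest(name):
    global peak, running
    with slots:                  # wait for a free slot on the host
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.05)         # stand-in for the real backup I/O
        with lock:
            running -= 1

threads = [threading.Thread(target=backup_guest, args=(f"vm{i}",))
           for i in range(6)]    # 6 guests all want to back up now
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrent backups:", peak)  # never exceeds the limit
```

Without the semaphore, all six backups would hammer the host's disks and network simultaneously; the hypervisor backup APIs described above solve the same problem at a lower level by coordinating snapshots on the host itself.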
There are products on the market that focus specifically on backing up only virtualized servers. Although these may sound like the best backup solution for virtual servers, as point solutions they force you to run multiple backup applications, one for physical and one for virtual servers. This increases the complexity of your environment while adding cost through additional software acquisition, maintenance, and training. A far better solution is a backup application that provides solid, reliable backup across your entire environment, for all your servers, physical and virtual. This simplifies your environment, reduces your management tasks, and helps keep costs down so you can reap the savings associated with virtualizing your servers. Most long-standing backup vendors have integrated support for hypervisor APIs to provide the recommended backup and restore methodologies for virtual servers, so there is no reason not to use a single backup system across your environment.
When you decide to migrate an application to a virtual server, or to set up a virtual server as a disaster recovery server, you should test your application in advance to make sure there are no hidden issues. Testing can easily be performed using the backup and recovery capability of your backup system: simply back up the production server, application, and data, and perform a bare-metal recovery to the virtual server. This copies the complete server image over and lets you run the application against a copy of the production data. Do be careful, as this is a real copy of your production server and it will perform any automated functions that may be programmed into it.
Alternatively, there are high-availability applications that will let you start the application on a virtual replica server and run it with a copy of the production data. This also gives you the ability to test and make sure everything is in place for a successful migration from physical to virtual servers.
However you set up your test environment, be sure to check the new virtual server for any licensing issues and understand what demands the application will put on system resources. This also gives you an opportunity to measure application performance on the virtualized server, so you will know what to expect when you migrate to production. As a disaster recovery alternative, you may be willing to accept reduced performance on a remote virtual server until you are able to recover your primary production server. The important thing is to know what to expect before you actually make a switchover.
We talked earlier about the need to document and diagram your new environment so you can quickly understand which servers are physical, which are virtual, and what applications they are running. This will greatly help you recover quickly in the event of an unplanned outage. Now, I know this can be a tedious task that requires you to continually remember to update your documents and diagrams as changes occur in your environment, and it is something we all like to procrastinate on. The real solution here is not to do this manually but to use a set of management tools that do it automatically for you.
Tools are available from the hypervisor vendors, but they are limited in scope and cover only their own virtual servers. If you are like most people and are trying out several hypervisors, you will not be able to get a single report across all your servers. You would be better off with a product that provides information across every server in your environment, physical and virtual, whether it is Microsoft Hyper-V, VMware, or another hypervisor. Best practice is to look for a comprehensive product that covers not only the servers but also the storage, the network, and components on the servers such as memory, CPU, and OS levels and patches. This may sound like an expensive proposition, but look for backup vendors who provide this as part of their backup application so you don't need to buy anything else.
The fourth best practice is to be careful when cloning or provisioning new virtual servers. This sounds simple, but because virtual servers are so easy to create, people often end up with many more than they really need. It is easy to say, "Well, I just want to test this one thing, so I will create a quick clone to use for testing." Before you know it, you have an awful lot of "clones" out there, and they are all active. The issue is that each VM takes up hardware resources and software licenses that could be used by production systems. You then have to go through each server, understand what it is being used for, and decide whether you can de-provision it.
Some might think this is only a problem for larger environments; however, smaller environments will actually "feel the pinch" sooner, as they have fewer physical resources available and less time to spend managing their virtual server environments.
Note that we now support Hyper-V
Same simple solution for both hypervisors, Hyper-V and VMware
Same console for virtual and physical servers
If someone asks about:
Cluster support for Hyper-V: YES. With the UDP Premium edition you will be able to protect Hyper-V clusters.
VIX: Our strategy is aligned with VMware…
How does host-based backup work?
For host-based agentless backup, a "proxy" is required
The proxy runs the actual backup process
It acts as the agent for the backup
The "agent" deduplicates data while sending it to the RPS
The proxy can be:
Running on the RPS server
Completely separate from the hypervisor
A virtual guest
Running on the Hyper-V “parent partition” (physical)
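The deduplication step in the list above can be illustrated with a toy source-side deduplicator: the sender hashes each fixed-size chunk and transmits only the chunks the recovery point server has not already stored. This is a sketch of the general technique, not UDP's actual wire protocol, and the chunk size is artificially tiny for the demo:

```python
# Toy source-side deduplication: send only chunks the store hasn't
# seen. Illustrates the general technique, not UDP's real protocol.
import hashlib

CHUNK = 4    # tiny chunk size for the demo; real systems use KBs
store = {}   # hash -> chunk, standing in for the RPS datastore

def dedup_send(data):
    """Back up `data`; return the number of chunks actually sent."""
    sent = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # only new chunks cross the wire
            store[digest] = chunk
            sent += 1
    return sent

first = dedup_send(b"AAAABBBBCCCC")   # all three chunks are new
second = dedup_send(b"AAAABBBBDDDD")  # only the last chunk is new
print(first, second)                  # -> 3 1
```

Because the hashing happens on the proxy before transmission, the second, mostly-unchanged backup moves a fraction of the data, which is exactly why the proxy doing the deduplication keeps load off both the hypervisor and the network.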
Demo steps:
Review the Environment
New Plan
Create a host based backup Task
Add VMs from Hyper-V
Add VMs from vSphere
Comment that it is the same procedure as for physical machines
Enter the Manager as the proxy and explain it
Select destination UDP RPS Manager
Select DS1
Enter Pass: 1234
Go to Schedule and show the advanced options (keep the defaults)
Talk about the recovery points. As we are using deduplication, we may keep more recovery points than before.
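The "more recovery points" claim is easy to show with rough retention math. A sketch with purely illustrative numbers (budget, image size, change rate, and deduplication ratio are all assumptions):

```python
# Illustrative retention math: with deduplication, the same disk
# budget holds more recovery points. All ratios are assumptions.

def recovery_points(budget_gb, full_gb, change_rate, dedup_ratio=1.0):
    """How many recovery points fit: one full plus incrementals.

    change_rate: fraction of the full image that changes per point.
    dedup_ratio: logical-to-physical reduction (2.0 = data halved).
    """
    full = full_gb / dedup_ratio
    incremental = full_gb * change_rate / dedup_ratio
    if budget_gb < full:
        return 0
    return 1 + int((budget_gb - full) // incremental)

# 1 TB budget, 200 GB server image, 5% daily change:
print(recovery_points(1000, 200, 0.05))                   # -> 81
print(recovery_points(1000, 200, 0.05, dedup_ratio=2.0))  # -> 181
```

With these assumed numbers, a modest 2:1 deduplication ratio more than doubles the retention on the same datastore, which is the talking point for this step of the demo.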
Go to Advanced and select "generate file system catalog…"
Save the plan and run a backup
Show the new groups created under Resources -> Nodes
Restore options:
Restore from Windows Explorer: SAME as UDP for physical servers
From the GUI (right-click on the VM under "Resources"):
"Browse files and folders"
"Recover VM": show that we can restore to a different location, but to the SAME hypervisor type. BUT:
We can use, for example, BMR or Virtual Standby to convert a recovery point to MS Hyper-V
Also, I'll mention that we can protect ESXi FREE editions by installing the UDP Agent within each guest VM
Same for virtual and physical servers
Cross-hypervisor conversion
Demo
Create a new Plan
Task 1: agentless backup for one VM from Hyper-V
Then add a new task for Virtual Standby on vSphere
(I should have an already prepared and working plan doing:
agentless backup -> replicate to RPS-R -> Virtual Standby -> replicate to remote RPS;
on the remote RPS, also have a Virtual Standby task after the "Replicate From" task)
At the end of the Virtual Standby demo, very quickly show Full System HA integration (for those who will not attend session 3)
Show RPS Jumpstart as a nice utility for UDP running in the cloud.
Tentative, just in case someone asks for more details