This presentation covers the logical hostname feature and its possible use cases with E-Business Suite: why it is a must-have configuration for DR, how it can improve your test/dev instance cloning and lifecycle processes (especially in a cloud deployment), how it is supported across 11i, R12.0, and R12.1, and why it is a very hot topic right now for R12.2. Additionally, we will describe possible advanced configuration scenarios such as container based virtualization. The content is based on real client environment implementation experience.
Optimize DR and Cloning with Logical Hostnames in Oracle E-Business Suite (OAUG Collaborate 2019 edition)
1. Session ID:
Prepared by:
Remember to complete your evaluation for this session within the app!
10685
Logical Hostnames in
Oracle E-Business Suite
Optimize DR and cloning
processes.
April 10th, 2019
ANDREJS PROKOPJEVS
Lead Applications Database Consultant
Pythian
@aprokopjevs
2. About Andrejs
Apps DBA from Riga, Latvia.
Speaking SQL since 2001. In Oracle world since 2004.
Boiling Oracle EBS since 2006.
Conference speaker:
UKOUG, nlOUG, DOAG, OAUG Collaborate
UKOUG 2017 Speaker Award winner
Andrejs Prokopjevs
Lead Applications Database Consultant
At Pythian since 2011
@aprokopjevs
prokopjevs@pythian.com
https://www.pythian.com/blog/author/prokopjevs/
3. Infrastructure: Transforming and
managing the IT infrastructure that
supports the business
DevOps: Providing critical velocity
in software deployment by adopting
DevOps practices
Cloud: Using the disruptive
nature of cloud for accelerated, cost-
effective growth
Databases: Ensuring databases
are reliable, secure, available and
continuously optimized
Big Data: Harnessing the transformative power of
data on a massive scale
Advanced Analytics: Mining data for
insights & business transformation
using data science
4. Agenda
• What is a logical hostname?
• Why should it be an important piece of the DR strategy?
• How can the cloning process be improved?
• Support for 11i, R12, and why it is very hot now for R12.2
• Advanced configurations – Linux Containers
6. What is a logical hostname?
• Logical = Virtual hostname
• Logical <> Physical hostname
• Virtual host or address is a logical mapping of your application for the end user
• Virtual host or address is a mapping of your application configuration
– Like Apache’s virtual hostname (<VirtualHost>)
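As a sketch of the Apache analogy (host names and paths here are placeholders, not part of EBS):

```apache
# Two logical hosts served by one physical server; Apache picks the
# application based on the Host: header sent by the browser.
<VirtualHost *:80>
    ServerName app1.domain.com
    DocumentRoot /var/www/app1
</VirtualHost>

<VirtualHost *:80>
    ServerName app2.domain.com
    DocumentRoot /var/www/app2
</VirtualHost>
```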
7. Example #1
• Each application runs separately, on a single server or on multiple servers in a pool
server1.domain.com
app1.domain.com
app2.domain.com
app3.domain.com
server2.domain.com
app4.domain.com
app5.domain.com
app6.domain.com
10. Why should it be an
important piece of
the DR strategy?
11. Disaster Recovery in general
• Option #1: Active – Active
• Option #2: Active – Passive
• Transparency
– Preferred
– Minimum downtime
– Minimum data loss
• SLA
– Maximum switchover window
• Bi – Directional
• Switchover testing
– At least twice per year
DR
ip-10-10-17-21
us-1.cloud.com
ERP
Production
ip-10-10-10-10
eu-1.cloud.com
ERP
12. DR – Oracle E-Business Suite – DB tier
• Physical standby
– $$$ (EE license for standby)
– Manual setup effort and
switchover
– Data loss: at least one redo log
• Data Guard
– $$$ (EE license for standby)
– Gap detection and recovery
– Flexible and easy switchover
– Data loss depends on the DG
mode
• Active Data Guard
– $$$$$$ (additional option)
DR
ebsdb3.domain.com
PRODDR
Production - RAC
ebsdb1.domain.com
PROD1
ebsdb2.domain.com
PROD2
DGB
13. DR – Oracle E-Business Suite – Apps tier
• Active – Passive only
• APPS_BASE sync using rsync
– Most common example
• Switchover – in-place cloning
DRMulti-Node production
ebsapp1.domain.com
CM log/out
APPS_BASE
ebsapp3.domain.com
CM log/out
APPS_BASE
ebsapp2.domain.com
APPS_BASE
RSYNC
14. DR – Oracle E-Business Suite – The Challenge
• Why do we need that in-place cloning at all?
– EBS configuration is very dependent on the FND node setup (FND_NODES)
– R12.2: IP validation is also performed by ADOP
– The hostname returned by the OS must match, while DR nodes have a new set of hostnames
and IPs
• It takes time
– Up to 60 minutes and can be even longer
• Workarounds
– A well-scripted and tested automated process, but adcfgclone.pl still takes time
– Have DR nodes already inside the primary configuration
• Must be maintained and patched at the same level as primary nodes (especially in R12.2)
• Reports of unavailable nodes in the stack (unless Active – Active)
• Still requires post steps (like relocation of the Concurrent Managers)
15. Having DR nodes already inside the primary configuration
Multi-Node production
ebsapp1.domain.com
CM log/out
APPS_BASE
ebsapp2.domain.com
APPS_BASE
Shared FS for
APPS_BASE
Shared FS for
CM log/out
DR
ebsapp3.domain.com
CM log/out
APPS_BASE
Shared FS for
APPS_BASE
Shared FS for
CM log/out
RSYNC
Own unique APPS_BASE.
Being maintained and patched
together with primary nodes.
CM relocation is manual by
updating the Target node for
each manager queue.
Shared FS and R12.2 –
NOT POSSIBLE!
Can’t have two masters.
Two separate shared file
systems, synchronized with
RSYNC between the sites.
Two separate shared file
systems, NOT synchronized
between the sites.
16. How should it look?
Production
ebsapp1.domain.com
app.domain.com
CM log/out
APPS_BASE
DR
ebsapp3.domain.com
app.domain.com
CM log/out
APPS_BASE
RSYNC
Hi!
I am a logical hostname.
17. Benefits
• Forget about the cloning
• Forget about configuration post steps
• Just restart your applications
• 5-10 minutes
19. Cloning process
• Cloning using rapid clone
Run adpreclone.pl
prod1.domain.com
Apps Tier
prod2.domain.com
DB Tier
Source
Run adcfgclone.pl
test1.domain.com
Apps Tier
test2.domain.com
DB Tier
Target
COPY
20. Cloning process – detailed
• adpreclone.pl
– Creates cloning stage
– Retrieves database information and creates cloning scripts
• adcfgclone.pl
– Creates context file
– Configures the target, registers and relinks the Oracle Homes
– Optionally restarts the services
• Post steps
• Complexity depends on the stack size
21. Cloning process – challenges
• Why can’t we just copy the files?
– EBS configuration, FND tables, Profile options
– Concurrent Managers validate the server hostname
– Multi node, Load balancer, OID/SSO integrations
– ... a lot more
• How much time does a typical cloning task take?
– A few hours to days, depending on architecture and resources
• Workarounds
– Automation (scripts, Ansible, etc), Applications Management Pack (EM)
– Storage level snapshots
22. How can it look?
Source
prod3.domain.com
db1.domain.com
EBSDB
prod1.domain.com
app1.domain.com
APPS_BASE
prod2.domain.com
app2.domain.com
APPS_BASE
Target
test3.domain.com
db1.domain.com
EBSDB
test1.domain.com
app1.domain.com
APPS_BASE
test2.domain.com
app2.domain.com
APPS_BASE
COPY
COPY
Hi!
I am a logical hostname.
23. Benefits
• No need to execute common Source and Target steps
– Cloning is just a copy of source to target
– Any DB tier change (SID, host, port) is just a context update
– Reduces a lot of manual effort required
– Reduces the task duration to hours, or even minutes (cloud)
• But you still may need some extra manual tasks
– Update of Web Entry profile options
– Update of profile options or printer settings
– An AutoConfig run to apply the changes
– Change of the database and applications passwords
– Re-registration of OID/OAM if it is SSO integrated
– Customizations and interfaces
• Can be easily scripted and does not take much time
24. Cloud provisioning example
• Create a VM Image / Template
• Provision from templates
• First-boot automation for post
configuration of the new target
Source
ebsapp.domain.com
APPS_BASE
Virtual Machine Repository
VIRTUAL MACHINE TEMPLATES
ebsdb.domain.com
DATABASE
New Target
ebsapp.domain.com
APPS_BASE
ebsdb.domain.com
DATABASE
26. Logical hostname support
• Release 11i / R12.0 / R12.1
– Only the physical hostname is supported
– You have to configure the logical name as the physical hostname on the server
– OS commands must return the configured hostname
• uname -n
• hostname
• Release R12.2
– Official support is implemented
– Configuration can be based on logical hostname
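For the pre-R12.2 case above, a quick sanity check could look like this (the logical name `app.domain.com` is an assumed placeholder):

```shell
# Verify that both OS calls return the expected (logical) hostname,
# which is what 11i/R12.0/R12.1 require before services will start.
check_hostname() {
  expected="$1"
  if [ "$(uname -n)" = "$expected" ] && [ "$(hostname)" = "$expected" ]; then
    echo "hostname OK"
  else
    echo "hostname mismatch"
  fi
}
check_hostname app.domain.com   # assumed logical hostname
```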
27. Logical hostname support – R12.2 requirements
• Patch level
– Patch 25178222:R12.AD.C.Delta.9
– Patch 25180736:R12.TXK.C.Delta.9
– Patch 17075726:R12.FND.C
• Alias in /etc/hosts
• Main context variables to be updated
– s_hostname
– s_%host
– WebLogic Domain server and machine records
• Optional context variables for Web Entry
– s_webentryurlprotocol, s_webentryhost, s_webentrydomain, s_active_webport
– s_login_page
– s_external_url
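For illustration, the /etc/hosts alias from the requirements above could look like this (the addresses and names are assumptions):

```
# physical hostname first, then the logical hostname as an alias
10.10.10.10   prod1.domain.com prod1 ebsapp1.domain.com ebsapp1
```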
28. R12.2 – what is changed?
• Context name, s_hostname, and s_%host context variables now point to the logical host value
• s_physical_hostname context variable is introduced
• $APPL_TOP/APPSPROD_ebsapp01.env
<host oa_var="s_hostname">ebsapp1</host>
<physical_hostname oa_var="s_physical_hostname">prod1</physical_hostname>
currentHost=`hostname|sed 's/\..*//g'|tr "[A-Z]" "[a-z]"`
host=`echo "ebsapp1"|tr "[A-Z]" "[a-z]"`
physicalHost=`echo "prod1"|tr "[A-Z]" "[a-z]"`
if [ $host != $currentHost ] && [ $physicalHost != $currentHost ]
then
echo "ERROR: This env should be sourced from $host !"
exit_status=1
fi
29. R12.2 – what is changed in Concurrent Manager startup?
• Patch 17075726
– Delivers changes to cpadmin.sh, reviver.sh, startmgr.sh and FNDLIBR code
• Changes in Concurrent Manager initialization
– "uname -n" is gone
– Startup uses the configured host value and passes it to the runtime as a parameter
$ ps -fu applmgr
...
FNDCRM apps/ABCDXXXXXXXXXYZ FND FNDCRM N 10 c LOCK Y EBSAPP1 2728881
...
/u01/vision/fs1/EBSapps/comn/util/jdk64/bin/java ...
-DEBS_HOSTNAME=ebsapp1 ... oracle.apps.fnd.cp.gsf.GSMServiceController
30. What to do with a release version below R12.2 TXK delta 9?
• Challenge
– Starting Concurrent Managers is the main problem!
– Target node is validated with server hostname.
– R12.2 has an FMW 11g home stack that depends on hostname references.
31. Hostname fake
• How about faking the hostname on your Linux server, just for Oracle EBS?
– fakehostname.c is created to replace the default gethostname() OS call and return the
desired hostname value from an environment variable
– fakehostname.c is compiled as a 32-bit library
– For R12.2 you need both 32-bit and 64-bit libraries to be compiled
– For R12.2 it is highly recommended not to do this, but to implement the officially supported way
– LD_PRELOAD is set pointing to your compiled library
– This tweak can be added to your environment source (.bash_profile) before your EBS
environment is set
34. Linux Containers
• LXC virtualization
– Multiple Linux instances on a single host
• Shared kernel
– Not the classic virtual machine we are used to
– Hardware is not emulated
– Isolated process tree of the OS, security, networking, and storage
• Available since RHEL / OEL 6 (2.6.24 kernel)
– Deprecated in RHEL 7 and replaced with the improved LXD technology
– But still can be installed from EPEL repository
• Supports different Linux OS deployments, no lock to physical host
– RHEL 6 or 7, Debian, etc
– OEL5 guest support is also there, but only with Oracle’s UEK (good for 11i users)
35. Architecture with classic virtualization
Oracle E-Business Suite
KERNEL
HOST OS
HyperVisor
Binaries and Libraries Network Storage Security
Guest Operating System with emulated hardware
36. Architecture with LXC
Oracle E-Business Suite
KERNEL
HOST OS
Container engine
Binaries and Libraries of
emulated guest OS
Network Storage Security
38. Resource sharing and control
• CPU, IO, Memory resources can be limited and controlled.
• Examples:
[root@ebsapp1 ~]# lxc-cgroup -n app cpuset.cpus 0,1
[root@ebsapp1 ~]# lxc-cgroup -n app memory.limit_in_bytes 53687091
39. Use case for E-Business Suite
• Create a container for Apps tier
– Configure your logical hostname for the container “operating system”
– Configure EBS based on this logical hostname
– Applicable to any release of EBS, since it is going to be the “physical” hostname of the
container
• DR, cloning and instance provisioning
– Rsync between containers
– Snapshot or keep in sync the container repository
– Create and transfer container templates
– Clone containers on the same host
• Officially it is certified only for R12.2 and using UEK kernel
– UEK R3 Quarterly Update 6 (3.8.13-98) or higher or UEK R4 (Doc ID 1330701.1)
40. Use case for E-Business Suite – DR example
Production
ebsapp1.domain.com
DR
ebsapp3.domain.com
RSYNC
Containers
app.domain.com
CM log/out
APPS_BASE
Containers
app.domain.com
CM log/out
APPS_BASE
41. Use case for E-Business Suite – Cloning example
Production
prod1.domain.com
Test
test1.domain.com
Template
or Copy
Containers
app.domain.com
CM log/out
APPS_BASE
Containers
app.domain.com
CM log/out
APPS_BASE
42. Linux Containers – The Future
• LXC is a deprecated, local repository based deployment approach, but still valid
• Domain based deployment is the future and is becoming the trend now
• Solutions available
– LXD
– Docker
– Kubernetes
• Docker and Kubernetes container based services are already available for major Cloud
service providers like AWS, Azure, Google
• No official templates yet for Oracle E-Business Suite
• Make your own, deploy and test
• Can be considered for Test and Development lifecycle
• Supply Docker images to developers (just 8 GB of RAM and 500+ GB of space)
43. Summary
• Official logical hostname support is finally available, but only for R12.2
• It can streamline instance lifecycle management and speed up the cloning process
• Easier instance provisioning from images (cloud deployment and automation)
• The DR failover window is minimal: just the time a restart of the services takes
• There are unofficial options available for pre-R12.2 releases if required
• Container based deployment is a possible future to review
44. Thank you
prokopjevs@pythian.com
Editor's notes
Good afternoon! Today we will talk about logical hostnames in Oracle E-Business Suite.
This topic is not difficult, but it is important to understand the concepts and, most importantly, the capabilities that this feature can give us.
My name is Andrey, and I am an Apps DBA from Latvia with 14 years of experience with Oracle.
I try to participate and present at conferences at least once or twice per year.
Sometimes I blog too, but not often, and only when there is something really hot.
A slide about the company I come from.
Pythian is a company providing remote IT services in various areas, including Oracle databases and Oracle E-Business Suite DBA services.
We have been in business for 20 years, are global with experts in 35 countries, and manage more than 350 customers to date.
What are we going to talk about today?
First of all, what a logical hostname is in general; why it should be an important piece of your DR strategy; and how the cloning process can be improved.
I will show you the support details of this feature for the main E-Business Suite releases on the market, and why it is very hot for R12.2.
We will also cover advanced configuration scenarios, such as using container based virtualization.
What is a logical hostname?
The most common association - is it like a virtual host definition in Apache?
Yes. A logical hostname is the same thing as a virtual hostname. It is not the physical hostname of the server your application is running on.
A virtual host or address is a logical mapping of your application for the end user, which a user enters in his or her browser to access your web based application.
It is also a mapping of your application configuration onto the physical server it runs on. Exactly what the VirtualHost directive does for Apache.
As a classical example, this is how we were running Web applications or sites 10 or even 20 years ago.
Each application runs separately, on a single server or on multiple servers in a pool. We have no clustering there.
And we could access the app by pointing to the exact physical hostname or a DNS alias of the machine our application is running on.
This example illustrates how we are doing it in the last 10 years – clustering.
We are running the same application on multiple nodes, and route or even load balance the traffic between the cluster nodes using an additional front-end layer (physical load balancer, content distribution endpoint, or a reverse proxy).
And this is how we are doing it now in the cloud.
We are running our apps on cloud provisioned instances, clustered, with easy failover between regions or just availability zones within the same region.
We can define an automatic scaling group to control our instance resources and engage more compute units based on the current workload requirements.
These instances can follow your own naming policies, or the defaults, where physical hostnames are automatically set to values based on the allocated internal IP address, the current region, and the availability zone, without any association with your corporate domain name.
Does this somehow affect the end user experience? No. End users just enter the required application address in their browsers, and all of this is absolutely transparent to them.
Let us talk about the Disaster Recovery.
How should a Disaster Recovery process be set up in general?
First, define whether it is an active/active deployment for you, where both sides are started and running in parallel, or an active/passive deployment, where you start your DR instance only during the failover process.
Next is transparency - this is what business and end users expect, at least in the perfect world, right? Downtime and potential data loss should be as minimal as possible.
Next is the SLA - the business must clearly plan and sign off the maximum allowed downtime window for the switchover in case of a disaster situation.
And the switchover processes should be bi-directional and tested at least twice per year.
The DR process for an E-Business Suite database is more or less well established by Oracle. Depending on business requirements and the budget, we have various options available.
Physical standby is a manually managed solution that requires a lot of effort and scripting to get it executed seamlessly. We may lose at least one redo log of data updates not shipped before the failover happens. Data Guard, if configured, automatically detects gaps and covers them. The switchover is quite flexible and easy in both directions. For an Active-Active solution there is the Active Data Guard option available, but it requires extra licensing. In other words: more money.
DR process for E-Business Suite Apps tier is a bit different. It is Active-Passive only.
The most common example is to set up an rsync job that syncs APPS_BASE and any other related locations with the other site at a scheduled interval.
The switchover is basically an in-place cloning process. And switchback is just another clone. So it is really challenging.
And now some details about the main challenges. Why do we need that in-place cloning at all? E-Business Suite configuration is heavily dependent on the node name configuration stored in FND tables (like FND_NODES or FND_CONCURRENT_QUEUES). You won’t get the services up unless the hostname OS call returns a value that matches the Applications setup. With R12.2 it’s even more: the configuration is tied to the node name and IP address of that node, which is validated by every ADOP call you run.
Basically, you need to follow a full in-place cloning process to get the new DR node set up. Usually a switchover takes up to 60 minutes. In some cases it can be even longer, depending on how complex your system setup is.
Are there any possible workarounds? Yes, there are: a well-scripted and tested process. But the window is still at least 30 minutes, because the adcfgclone.pl run takes time.
Another option is to pre-set the DR nodes in your configuration. But you’ll need to fully maintain the DR node setup as if these were primary nodes. With R12.2 your DR nodes will be patched within the same single ADOP patch cycle. With the DR nodes down, your system will report unavailable nodes everywhere. And some post steps are still needed to relocate the concurrent managers.
This is an illustration of that workaround. The DR side should have a unique APPS_BASE that is maintained together with the primary site. The APPS_BASE file systems must not be synchronized between the sites, as they contain unique instances that exist in parallel. With R12.1 we can use separate shared file systems for the primary and DR sites, but with R12.2 it is not possible because we can’t have two masters. So each node must utilize its own isolated APPS_BASE file system. Other file systems, like CM log/out or interfaces, can be shared and synchronized between the sites. Relocation of the concurrent managers is a manual process of updating the target node for each manager queue. As you can see, this is quite complex.
Here is how it should look. We move an independent container of our E-Business Suite environment without any link to the physical host it is running on.
And we have a logical hostname here.
What are the benefits of this approach? Obviously, forget about the cloning process. Forget about configuration post steps. Just restart your applications. Usually it takes 5-10 minutes to complete this.
How can the cloning process be improved?
This diagram illustrates the classical method of E-Business Suite instance cloning using Rapid Clone.
We run adpreclone.pl on the source, copy the source onto the target, and then run adcfgclone.pl to complete the cloning.
adpreclone.pl creates the stage and scripts by fetching the metadata from the database.
adcfgclone.pl creates the context file, configures the target instance and optionally restarts the services.
As we know, there is a number of post steps that Rapid Clone does not cover. Complexity depends on the stack size and architecture, high availability, and integrations.
Why can’t we just copy the files? For the same reasons we discussed before. There are hostname references inside the configuration and in FND tables inside the database. Additionally, there is a number of profile options that may reference them as well. Concurrent managers validate the server hostname before starting services in the same way. Clustering, load balancing, OID/SSO integrations, and other configuration may differ between environments.
How much time does a typical cloning task take? It depends on the architecture and resources, and typically runs from a few hours to days. Are there any workarounds to reduce this time? The first thing that can help is automation, limiting the human involvement that can cause delays. Another possibility is the use of storage snapshots, which can improve the time spent on the copy process.
So how can it look? In this example, source and target have the same set of multiple Apps tiers in the stack. During cloning we are just copying the source onto the target. And the logical hostname is the key here.
Why do I have the 2nd Apps tier for the target in grey? This is to note that your target configuration may differ. It is your choice whether you need all the same nodes as the source or not.
If not, obviously some configuration changes are required to remove or disable nodes that are not needed on the target side.
There is no need to execute our commonly known pre-steps and the config wizard. Cloning can be just a copy process. If, for example, the DB connection details change (like SID, hostname, or port), these can be applied as a simple context file update plus AutoConfig. This reduces a lot of the manual effort required, makes automation easier, and cloning time can be reduced to just the copy effort: a few hours, or minutes if deployed in the cloud.
But still you may need some extra tasks, like Web Entry profile updates, updates for other profile options or printer settings as required, AutoConfig, change of the passwords (SYS/SYSADMIN/APPS etc). De-register and re-register OID/OAM if it is SSO integrated. Customizations and interfaces may need some updates.
All this can be easily scripted.
With a cloud based deployment, provisioning of new instances can be very easy. Just create a virtual machine image from the source environment and store it in your private repository, and later provision new instances from the template you have created. You can automate the first boot sequence to launch a process that applies the required configuration post updates, including custom ones.
So what is required in EBS to support this feature?
Does E-Business suite support the logical hostname at all?
For release 11i, R12.0 & R12.1, only physical hostname is supported. It requires changes on OS level so that “hostname” and “uname -n” calls would return the required hostname value.
What about the latest R12.2 release? Yes, we have an officially supported way of doing it. We build the EBS configuration on a logical hostname, while the OS host keeps its own physical hostname.
R12.2 requires AD & TXK Delta 9 and one extra FND one-off to support the logical hostname feature. You still need to maintain it inside /etc/hosts as an alias.
The following context variables, like s_hostname, can be updated to point your system to the logical hostname.
Also, you have to manually update the WebLogic domain server and machine references.
Web Entry points are optional if you are terminating your end user access externally (like load balancer or reverse proxy).
What is changed?
We can set all our s_hostname and other host references in the context file to the logical host values. An AutoConfig execution will make them live, except for a change of the context name, where you still need to follow a standard in-place clone process. There is a new context variable, “s_physical_hostname”, and its value should be the real physical hostname of the hosting machine. Yes, you still need to maintain it, and the only place it shows up is the environment set file, which validates where you are located. It will trigger an error if the current machine hostname matches neither the logical value nor the physical hostname.
So all this is not a 100% release from the physical hostname dependency yet, but it’s not a functional dependency anymore. Not ideal, but good enough, in my view.
This FND patch delivers the changes for Concurrent Manager runtime scripts and FNDLIBR code.
It no longer verifies the server hostname using “uname -n” as before, and the logical hostname value is passed to the target process as an argument.
Here are a couple of examples.
What are the options for systems running below R12.2 and technology delta 9?
Concurrent manager startup is the main challenge, as it validates whether the server hostname matches the configuration reference.
For R12.2 there is another dependency: the WebLogic domain, which includes hardcoded hostname references in its server and machine records.
So do we have any alternatives?
One of the options is to fake the hostname on your Linux server.
Create and compile a C program that replaces the default gethostname() OS function call and returns the desired value from an environment variable. Normally we need a 32-bit library, but for R12.2 both 32-bit and 64-bit libraries have to be compiled. However, for R12.2 it is highly recommended to implement the official native support instead of playing around with these custom, unsupported tweaks. We just need to set LD_PRELOAD to point to our newly compiled library to enable it.
This tweak can be also added to the bash profile and be activated before sourcing your E-Business Suite environment file.
Here is a quick example of using this tweak. We have 32-bit and 64-bit libraries.
Initially the hostname call returns us the original physical hostname of the server, and once we set the LD_PRELOAD, we get our custom value set in the MYHOSTNAME variable.
For additional configuration details, you can refer to this blog post that our team has published.
Let us talk about other more advanced options, like Linux Containers.
What are Linux Containers?
It is a different type of virtualization, where the hardware is not emulated for a guest Operating System; instead, the same kernel resources are shared between the host and all the guests. In modern terms, this is container based virtualization. Containers have an isolated process tree, a security layer (like OS users), and separate networking and storage. This feature is available from the 2.6.24 kernel (RHEL / OEL 6).
With RHEL 7, LXC became deprecated and is replaced by LXD. This is similar to LXC, but with a wider, domain-based deployment scope.
If you don’t need that and are looking for something simpler, LXC can still be installed from the EPEL repository.
You can deploy different OS distributions with a supported kernel; you are not locked to the host OS release. For example, run Red Hat 7 on the host but deploy Red Hat 6 for your R12.1.3 EBS Apps tiers.
OEL5 can also be deployed, but only with the Oracle Unbreakable kernel; the standard Red Hat compatible kernel is not supported. This might be a useful case for 11i users, as recent OS releases are not certified.
This is the classical virtualization illustration. We have a hypervisor virtualization layer that emulates all the hardware for the guest Operating System.
And here is how this differs with containers. As you can see, the hardware emulation and complete guest Operating System layers are gone.
By default containers are stored under /container disk path. Default log path: /var/log/lxc.
A few examples of how to create a guest container from a template, start it, or enter the terminal console.
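The command examples referred to here are not included in this export; a typical sequence with the Oracle LXC template might look like the following (the container name "app" is an assumption; these commands require root and the lxc packages, so they are shown for illustration only):

```shell
lxc-create -n app -t oracle -- -R 6.5   # create a guest container from the Oracle template
lxc-start  -n app -d                    # start it in the background
lxc-console -n app                      # enter its terminal console
```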
Host resources can be shared, with control over how much a certain container is allowed to use: CPU, IO, memory.
The E-Business Suite use case is simple. For your deployment you can just create a dedicated container instance and configure the logical hostname inside it. It will work as the standard “physical” hostname and will be returned by the OS “hostname” and “uname -n” calls. It is a solution for any EBS release, since we are emulating a physical host.
For DR you can rsync the Apps file system between local and remote container. Or keep containers in sync on a lower level between the repositories.
For cloning and instance provisioning, Linux containers support in-place cloning and template creation, and templates can be easily transferred to another target host.
Officially, container support is certified only for R12.2, and only if Oracle’s own Unbreakable kernel is in use.
So in other scenarios you are doing it at your own risk, and Oracle may ask you to move away from the container and reproduce any problem you encounter.
This basically illustrates the idea. This is the same diagram we were showing before, just with the container layer added.
You can sync the content under the container repository, or sync the Apps file system within the container itself. It’s up to you.
The same idea applies to cloning. You can create a new template, transfer it, and create a new container based on the template on the target side.
Or just do a simple classical copy of Apps Base between the containers on both sides.
As I have already mentioned, LXC is deprecated, but still valid for simple deployments. Domain based deployment of containers is the future and is becoming the real trend.
Solutions available are LXD, very popular Docker, and Kubernetes. Container services are now provided by major cloud players like AWS, Azure and Google Cloud.
There are no official templates yet for Oracle E-Business Suite, but you can make your own, deploy them, and test whether they suit your infrastructure and business requirements.
This can be considered for the Test and Development lifecycle, but not yet for production, due to the limited official support certification: R12.2 and the Unbreakable kernel only.
You can even look at options for supplying personal Docker images to developers. It just requires a bit of free resources, but it is doable.
To summarize what we have discussed so far.
There is official logical hostname support, but only with R12.2. It can streamline instance lifecycle management and speed up the cloning process. The DR failover window is minimal and requires only a service restart. Instance provisioning from images is easier if running in the cloud.
There are unofficial options available for releases before R12.2 that are production proven and stable enough, including the container based deployment that can be considered.