Highly Available BizTalk
1. Highly Available BizTalk
Concepts Around the Implementation of
BizTalk Server 2006 in a High Availability Environment
Daniel Toomey & Patrick Hood
presenters
2. Goal of This Presentation
Not highly technical (i.e. no code)
Not demo-heavy
Discussion of the main concepts and strategy
of HA in an integration solution
Understand Microsoft recommended
practices
3. Definition of High Availability (HA)
“…is a system design protocol and associated
implementation that ensures a certain
absolute degree of operational continuity
during a given measurement period.”
(www.wikipedia.org/wiki/High_Availability)
4. Definition of High Availability (HA)
Redundancy of each functional component
Seeks to eliminate “single point of failure”
Single component failure triggers recovery
mechanisms that are transparent to users of
the system
5. High Availability vs High Performance
High Availability is about Failover
Does not necessarily involve load balancing
Active/Passive configuration
Scaling Out
High Performance is about… Performance!!
Typically about load balancing and managing
high throughput
Active/Active configuration
Scaling Out or Scaling Up
Not necessarily Highly Available
7. High Availability in BizTalk
BizTalk Components
Databases (SQL Server)
Services (Host Instances)
Adapters (Send / Receive)
Enterprise Single Sign-On (SSO)
Process for ensuring High Availability is
different for each of these components
8. BizTalk Groups
Out-of-the-box functionality for BizTalk allows
for the easy (and default) establishment of
“BizTalk Groups”
A BizTalk Group is a collection of servers that
host BizTalk services (hosts) which operate
upon the same Message Box(es)
All hosts within a BizTalk Server Group are
based upon the same set of configuration and
message storage databases
Automatic Load Balancing
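The automatic load balancing within a BizTalk Group can be sketched as a toy model. This is an illustration only, assuming a single shared queue stands in for the MessageBox; the server names `BTS01`/`BTS02` and all function names are invented, not BizTalk APIs:

```python
# Toy model of a BizTalk Group: host instances on several servers pull
# work items from one shared MessageBox queue, so load spreads
# automatically without any extra load balancer.
from collections import deque


class MessageBox:
    """Shared store that every host instance in the group polls."""

    def __init__(self, messages):
        self._queue = deque(messages)

    def dequeue(self):
        return self._queue.popleft() if self._queue else None


def run_group(message_box, server_names):
    """Host instances poll in turn; returns the resulting work distribution."""
    processed = {name: [] for name in server_names}
    idle = set()
    while len(idle) < len(server_names):
        for name in server_names:
            msg = message_box.dequeue()
            if msg is None:
                idle.add(name)
            else:
                processed[name].append(msg)
    return processed


box = MessageBox(range(10))
dist = run_group(box, ["BTS01", "BTS02"])
# Every message is handled exactly once, split across both servers.
```

The key point the sketch shows: because all hosts operate on the same MessageBox, no external balancer is needed for processing work.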
9. HA for BizTalk Databases
SQL Server supports BizTalk through data
persistence:
Stores all configuration, business rules, message state
and tracking info
Stores the messages themselves
Separates data from hosts that process the data
The most critical component in a BizTalk architecture
Can be implemented as a SQL Server Cluster
(active / passive mode)
11. HA for BizTalk Databases
1. Create global domain accounts
2. Configure the SQL Server cluster before
BizTalk installation
3. Install BizTalk
4. Run the BizTalk Configuration Wizard in
custom configuration mode
5. Specify the SQL Server cluster for the
BizTalk databases
12. HA for BizTalk Databases
Failover Behaviour in BizTalk:
BizTalk databases are temporarily unavailable during
failover
In-process host instances are recycled until connection
to the SQL Server is automatically restored
Isolated host instances are paused, an error is
generated in the BizTalk Server 2006 Application log
and receive locations are disabled
Once connection to the SQL databases is restored,
document processing resumes normally and receive
locations are enabled
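The failover behaviour above can be sketched as a retry loop. This is a simplified illustration, assuming invented function names and a simulated outage, not the actual host-instance internals:

```python
# Sketch of failover: a host instance keeps recycling until the
# (simulated) SQL Server cluster finishes failing over, then
# processing resumes on its own.
import itertools


def make_flaky_connect(failures_before_success):
    """Simulate a SQL cluster that is unavailable during failover."""
    attempts = itertools.count(1)

    def connect():
        if next(attempts) <= failures_before_success:
            raise ConnectionError("MessageBox unavailable (failover in progress)")
        return "connected"

    return connect


def recycle_until_connected(connect, max_recycles=10):
    """Mimic an in-process host instance recycling after each failure."""
    for recycle in range(1, max_recycles + 1):
        try:
            return connect(), recycle
        except ConnectionError:
            continue  # host instance recycles and retries
    raise RuntimeError("SQL cluster never came back")


state, recycles = recycle_until_connected(make_flaky_connect(3))
# Connection succeeds on the 4th attempt, with no operator intervention.
```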
13. HA for BizTalk Databases
SQL Server Database Mirroring
Not currently a supported solution for ensuring
high availability of the Microsoft BizTalk
Server 2006 databases
Potential problems maintaining transactional
consistency in the BizTalk databases
Log Shipping is the recommended practice for
Disaster Recovery
14. HA for BizTalk Hosts
Hosts provide logical containers for
functionality:
Receiving
Sending
Processing
Recommended practice is to create hosts for
each separate functionality
Creates security boundaries
Easier management & scalability
15. HA for BizTalk Hosts
In-process Hosts
Run inside of BizTalk runtime process
Contain all non-Web-based artefacts:
Orchestrations
Adapter send handlers
Adapter receive handlers
(except for HTTP & SOAP)
Isolated Hosts
Do not run inside of BizTalk runtime process
HTTP and SOAP receive handlers
16. HA for BizTalk Hosts
BizTalk Server 2006 lets you separate hosts and run
multiple host instances to provide high availability
No additional clustering or load-balancing mechanism
required because BizTalk Server 2006 automatically
distributes workload across multiple computers
through host instances
However, hosts running the receive handler for the
following adapters require a load-balancing
mechanism such as Network Load Balancing (NLB)
to provide high availability:
HTTP
SOAP
BizTalk Message Queuing (MSMQT)
17. HA for BizTalk Hosts (Receiving)
Scaled Out Receiving Hosts
18. HA for BizTalk Hosts (Receiving)
Scaled Out Receiving Hosts (multiple clients)
19. HA for BizTalk Hosts (Receiving)
Using host instances on multiple computers:
FILE Adapter (point host instances to same UNC path)
SQL Adapter (point host instances to same database table)
Using host instances on multiple computers with NLB:
HTTP Adapter (subscribe to a shared clustered URL)
Web Services Adapter (NLB distributes incoming messages)
SharePoint Adapter (subscribe to a shared URL)
MSMQT Adapter (NLB distributes incoming messages)
Using a clustered BizTalk host (req. Enterprise Edition):
FTP Adapter
POP3 Adapter (multiple concurrent connections)
MSMQ Adapter
20. HA for BizTalk Hosts (Receiving)
Adapter Type    Default Config    NLB Cluster    Clustered Host
FILE                  X
HTTP                                    X
SOAP                                    X
SQL                   X
WSS                                     X
FTP                                                      X
POP3                                                     X
MSMQ                                                     X
21. HA for BizTalk Hosts (Processing)
Scaled Out Processing Hosts
22. HA for BizTalk Hosts (Processing)
Scaled Out Processing Hosts
Orchestration state is maintained centrally
in SQL Server, not locally on each BizTalk
Server computer
BizTalk load balances automatically
One instance can complete a process
started by another instance
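The idea that one instance can complete a process started by another can be sketched as follows. This is an illustration under assumed names (`BTS01`/`BTS02`, `run_step`), using a plain dictionary to stand in for the central SQL Server state store:

```python
# Sketch of centrally held orchestration state: each processing step
# loads state from a shared store, advances it, and writes it back,
# so any host instance on any server can run the next step.


def run_step(store, instance_id, server):
    """Advance one orchestration step; any host instance may do this."""
    state = store[instance_id]
    state["step"] += 1
    state["servers"].append(server)
    store[instance_id] = state  # persist back to the central store
    return state["step"]


store = {"ORC-1": {"step": 0, "servers": []}}
run_step(store, "ORC-1", "BTS01")   # started on one server...
run_step(store, "ORC-1", "BTS02")   # ...continued on another
final = store["ORC-1"]
# The orchestration advanced two steps across two different servers.
```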
Proof-of-Concept
23. BizTalk Host Load Balancing
Used “CallOrchestration” sample from SDK
Inserted Delay shapes and trace messages to
log the step and the processing server
Deployed to two servers in a BizTalk Group
Submitted 1000 files
Analysed the resulting logs
For more than 25% of the files, processing
steps were divided across more than one
individual server (i.e. host instance)
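The log analysis in this proof-of-concept can be sketched as below. The trace record format here is invented for illustration; the sketch computes the share of files whose processing steps ran on more than one server:

```python
# Given trace records of (file id, step, server), compute the fraction
# of files whose steps were divided across more than one server.
from collections import defaultdict


def fraction_split_across_servers(records):
    servers_per_file = defaultdict(set)
    for file_id, _step, server in records:
        servers_per_file[file_id].add(server)
    split = sum(1 for servers in servers_per_file.values() if len(servers) > 1)
    return split / len(servers_per_file)


trace = [
    ("f1", 1, "BTS01"), ("f1", 2, "BTS02"),   # split across servers
    ("f2", 1, "BTS01"), ("f2", 2, "BTS01"),   # single server
    ("f3", 1, "BTS02"), ("f3", 2, "BTS01"),   # split
    ("f4", 1, "BTS02"), ("f4", 2, "BTS02"),   # single server
]
ratio = fraction_split_across_servers(trace)  # 0.5 for this sample
```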
24. HA for BizTalk Hosts (Sending)
Scaled Out Sending Hosts
25. HA for BizTalk Hosts (Sending)
Scaling Out Sending Hosts
Similar to Processing Hosts – Host & Data Independence
Special Considerations:
FTP Send Adapter
Run in a clustered BizTalk Host
Supports only one host instance running at a time
MSMQ Adapter
Cluster the MSMQ Service
Cluster the BizTalk Host in the same group
Configure MSMQ Send Handler within clustered host
26. BizTalk Host Clustering
Only necessary for certain adapters
Requires BizTalk 2006 Enterprise Edition
Requires BizTalk Servers to be configured as a Windows
Server Cluster first
Considerations:
A non-clustered host should not be run on a
Windows Server cluster where Enterprise SSO is clustered
More info:
http://msdn2.microsoft.com/en-us/library/aa560059.aspx
27. Network Load Balancing (NLB)
As previously mentioned, the following adapters
require a load-balancing mechanism such as
Network Load Balancing (NLB) to provide high
availability:
HTTP
SOAP
BizTalk Message Queuing (MSMQT)
Can load-balance the BAM portal & BAS website
Provides High Availability at the Network level, rather
than the Resource level
28. Network Load Balancing (NLB)
NLB farm of servers appears as one server to
clients
Distributes load between the servers in the
farm
The servers in the NLB farm are aware of
each other and automatically handle server
unavailability
Each server is fully self-contained
BizTalk grouping provides balancing on
hydration of long-running processes
30. Network Load Balancing (NLB)
Easier and more flexible management
Rolling OS update & software deployment
Uninterrupted availability and fault tolerance
Server failure & hardware update/replacement
Better scalability
True horizontal scalability
Up to 32 servers in an NLB farm
Multiple farms via DNS round-robin
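Scaling past one farm with DNS round-robin can be sketched as below. The hostname and VIP addresses are invented, and `RoundRobinDNS` is an illustrative stand-in for the DNS server's rotation of A records:

```python
# Sketch of DNS round-robin across multiple NLB farms: one hostname
# resolves to the virtual IP of a different farm on each lookup.
import itertools


class RoundRobinDNS:
    """Rotate through the address records registered for each name."""

    def __init__(self, records):
        self._records = {name: itertools.cycle(ips)
                         for name, ips in records.items()}

    def resolve(self, name):
        return next(self._records[name])


dns = RoundRobinDNS({"biztalk.contoso.local": ["10.0.1.10", "10.0.2.10"]})
first = dns.resolve("biztalk.contoso.local")    # farm 1 VIP
second = dns.resolve("biztalk.contoso.local")   # farm 2 VIP
third = dns.resolve("biztalk.contoso.local")    # back to farm 1
```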
31. Network Load Balancing (NLB)
Option of Hardware-based or Software-based
NLB solution
Hardware-based solution consists of a
specialised network appliance e.g.
F5 Networks
Radware
Cisco
Foundry
Alteon
32. Windows NLB
Full software NLB implementation
Supported on all versions of Windows
2003 Server
Supported on Windows 2000 Advanced
Server and Datacenter Server Editions
Generally a 5-10% overhead per server
MSCS and Windows Network Load
Balancing (NLB) are NOT supported on
the same set of nodes
33. Windows NLB
Consider NICs & Unicast vs. Multicast
Mode & number of NICs, and when to use each:

Single network adapter in unicast mode:
A cluster in which ordinary network communication among cluster hosts
is not required and in which there is limited dedicated traffic from
outside the cluster subnet to specific cluster hosts.

Multiple network adapters in unicast mode:
A cluster in which ordinary network communication among cluster hosts
is necessary or desirable. It is also appropriate when you want to
separate the traffic used to manage the cluster from the traffic
occurring between the cluster and client computers.

Single network adapter in multicast mode:
A cluster in which ordinary network communication among cluster hosts
is necessary or desirable but in which there is limited dedicated
traffic from outside the cluster subnet to specific cluster hosts.

Multiple network adapters in multicast mode:
A cluster in which ordinary network communication among cluster hosts
is necessary and in which there is heavy dedicated traffic from
outside the cluster subnet to specific cluster hosts.
34. Windows NLB
Port-rules – multiple-host or single-host
Affinity - can be set to:
None
Single-client (or sticky-IP)
Class C
Host Priorities
For BizTalk NLB, recommend multiple
host, no affinity, even priority
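How the three affinity modes change client-to-host mapping can be sketched as below. This is a simplified model, not the actual Windows NLB hashing algorithm; the host names and addresses are invented:

```python
# Sketch of NLB affinity: the mode decides which parts of the client's
# source address feed the hash that picks a cluster host.
import hashlib


def pick_host(src_ip, src_port, hosts, affinity="none"):
    if affinity == "none":        # hash IP and port: maximum granularity
        key = f"{src_ip}:{src_port}"
    elif affinity == "single":    # sticky-IP: same client, same host
        key = src_ip
    elif affinity == "class_c":   # the whole /24 maps to one host
        key = ".".join(src_ip.split(".")[:3])
    else:
        raise ValueError(f"unknown affinity: {affinity}")
    digest = hashlib.sha256(key.encode()).hexdigest()
    return hosts[int(digest, 16) % len(hosts)]


hosts = ["BTS01", "BTS02", "BTS03"]
# Single-client affinity: every port from one IP lands on the same host.
a = pick_host("10.0.0.7", 1111, hosts, affinity="single")
b = pick_host("10.0.0.7", 2222, hosts, affinity="single")
# Class C affinity: the whole 10.0.0.x subnet maps to one host.
c = pick_host("10.0.0.99", 3333, hosts, affinity="class_c")
d = pick_host("10.0.0.7", 4444, hosts, affinity="class_c")
```

With "no affinity" (the recommendation above for BizTalk), each new connection can land on a different host, which maximises balance across the farm.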
35. Enterprise Single Sign-On (SSO)
Critical part of the BizTalk infrastructure
Helps to secure information for the receive
locations
Master Secret Server
Stores the encryption key used to secure data
in the credentials database
Must configure the first computer where SSO
is installed as the Master Secret Server
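The role of the master secret can be sketched with a toy scheme. This is a demonstration only (an HMAC-derived XOR keystream), not the real SSO cryptography, and all names and values are invented:

```python
# Toy illustration of the Master Secret Server idea: one master secret
# derives the keys that protect entries in the credentials database,
# so every SSO server depends on the holder of that single secret.
import hashlib
import hmac


def derive_keystream(master_secret, credential_id, length):
    """Derive a per-credential keystream from the master secret."""
    return hmac.new(master_secret, credential_id.encode(),
                    hashlib.sha256).digest()[:length]


def transform(master_secret, credential_id, data):
    """XOR with the derived keystream; applying it twice decrypts."""
    stream = derive_keystream(master_secret, credential_id, len(data))
    return bytes(b ^ s for b, s in zip(data, stream))


master = b"master-secret-held-by-one-server"
sealed = transform(master, "sap-account", b"p@ssw0rd")
opened = transform(master, "sap-account", sealed)
# Without the master secret, no new credential can be sealed or opened.
```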
36. Enterprise Single Sign-On (SSO)
If the Master Secret Server fails, currently
running operations continue, but new
credentials cannot be encrypted
BizTalk Server dependency on Master Secret
Server:
37. High Availability for Ent SSO
Master Secret Server CANNOT exist on an
NLB cluster
Master Secret Server can be moved from
BizTalk NLB servers (often to SQL Server
infrastructure)
Master Secret Server can be clustered
38. Summary
In a multi-system environment, High Availability
means securing not only the individual systems
themselves but also the integration architecture
BizTalk Server 2006 can be implemented to support
High Availability using a variety of techniques and
configurations for the various components:
OTB functionality via BizTalk Server Groups
SQL Server Failover Cluster
Windows NLB Cluster
Clustered Hosts
39. References
Planning for High Availability
http://msdn2.microsoft.com/en-us/library/aa558765.aspx
Planning Your Platform for Fault Tolerance
http://msdn2.microsoft.com/en-us/library/aa560135.aspx
Creating a Highly Available BizTalk Server Environment
http://msdn2.microsoft.com/en-us/library/aa560847.aspx
Sample BizTalk Server High-Availability Scenarios
http://msdn2.microsoft.com/en-us/library/aa578057.aspx
Providing High Availability for BizTalk Server Databases
http://msdn2.microsoft.com/en-us/library/aa559920.aspx
High Availability for Enterprise Single Sign-On
http://msdn2.microsoft.com/en-us/library/aa560674.aspx
High Availability for the BizTalk Base EDI Adapter
http://msdn2.microsoft.com/en-us/library/aa561569.aspx
High Availability and the Microsoft Operations Framework
http://msdn2.microsoft.com/en-us/library/aa560207.aspx
Using Windows Server Cluster to Provide High Availability for BizTalk Hosts
http://msdn2.microsoft.com/en-us/library/aa560059.aspx
- Subject of interest to me – latest project
- Interest to at least one other member of the community
- HA now not so much a luxury but a necessity, given the critical nature of IT systems
- HA implementation in an integration environment is a bit of a mystery
- Walk away understanding what you need to do according to MS recommended practices
- Use referenced documentation from MS
Clinical definition
-Ensuring continuity of service despite outages, maintenance, etc
- Some of the steps towards both may be similar or overlap
- HA is about redundancy, scaling out
- Performance is about processing power, scaling up (and/or out)
- 6 databases used in BTS 2004, 8–12 used in 2006
- SSO manages credentials, allows authentication
- SQL Server is the backbone of BizTalk
- First place to look at implementing HA
- Typical BizTalk HA architecture
- Use of a SAN or disk array is recommended (redundancy)
- Slight pause in BizTalk operations while the active node switches over
- Important: BizTalk will resume operations all by itself
- Justice example
- SQL Server consultants will recommend this over clustering
- Supported in the future?
- Hosts reside on the BizTalk servers
- Installation provides for runtime functionality of the hosts
- Decision for separation of hosts based on application needs, configuration
- Host instances run as services on the machine
- One to many
- Exception: endpoint-based receives (URL, IP address)
- Our example (QCS)
- Solution has three types of hosts defined
- Receiving host is duplicated into two instances, two machines
- Don't need all nine replications; six would do
- Use NLB for distribution handling outside of BizTalk
- FTP does not have a concept of locking files; need a cluster to prevent duplicate processing
Recommended configurations for High Availability receive adapters
- Simpler than receive
- Separation of data from processing
- Centrally managed state
Easier and more flexible management: move the workload onto particular servers within a cluster (e.g. to update a server without impacting the accessibility of data and services to clients). Uninterrupted availability and fault tolerance: if a server fails, the clustering software detects the failure and fails over to a remaining server; the servers in the NLB farm are aware of one another and automatically handle server unavailability. Better scalability: load balancing can be scaled across multiple servers in a cluster, and applications written to run on server clusters can perform dynamic load balancing.
Hardware-based solutions can also offer other benefits, such as SSL offloading, alerting/triggers, and custom scripting. You may need to consider redundancy of the hardware device itself when using it for HA.
Affinity:
- None: Network Load Balancing load-balances client traffic from one IP address and different source ports across multiple cluster hosts. This maximises the granularity of load balancing and minimises response time to clients.
- Single-client (sticky-IP): to assist in managing client sessions, the default single-client affinity mode load-balances all network traffic from a given client's IP address on a single cluster host.
- Class C: further constrains this to load-balance all client traffic from a single class C address space.