Integration Technical Conference 2019
E03: High Availability
for Cloud Integration
Platform
Rob Nicholson
rob_nicholson@uk.ibm.com
Integration Technical Conference 2019 2
Cloud Integration Platform High Availability
Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the high-availability considerations for the container platform. Based on this, we then explore HA for each Integration Service.
Agenda:
• Overview of Cloud Integration Platform Architecture.
• Kubernetes and IBM Cloud Private High Availability Concerns.
• Deployment considerations
• MQ High Availability
• Event Streams High Availability
• API Connect High Availability
• App Connect High Availability
• Aspera High Availability
Integration Technical Conference 2019 3
Cloud Integration Platform Architecture
[Architecture diagram: a Kubernetes cluster with Master/Proxy nodes, Management nodes, and Worker nodes. ICP services (IAM, Logging, Monitoring, Metering) and Integration Services (MQ, Event Streams, App Connect, DataPower, API Connect, Aspera) run on the cluster, and the Cloud Integration Platform Management UI unifies the ICP services and Integration Services UI components.]
• Cloud Integration Platform (CIP) deploys and manages instances of Integration Services running on a Kubernetes infrastructure.
• Instances of Integration Services are deployed individually as needed to satisfy each use case.
• Services can be deployed in highly available topologies across multiple Kubernetes worker nodes, or non-HA on a single worker.
• Deployment and management is via the UI or CLI, allowing integration with CI/CD pipelines (see the sketch below).
• CIP leverages the IBM Cloud Private (ICP) services, which run on dedicated Master/Proxy and Management nodes in HA or non-HA configurations.
• The Management UI unifies the management UIs of the Integration Services and the ICP services.
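As a sketch of the CLI path, a pipeline step might install a service with Helm and a values file. The chart name, release name, and value keys below are illustrative assumptions, not the exact names in the CIP catalog.

```yaml
# values-ha.yaml -- hypothetical values for an HA-style install.
# A pipeline step could then run, for example:
#   helm install --name my-queue-manager -f values-ha.yaml <cip-catalog-chart>
license: accept                # charts typically require explicit license acceptance
persistence:
  enabled: true                # keep state on persistent volumes
  useDynamicProvisioning: true
resources:
  requests:
    cpu: "1"
    memory: 1Gi
```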
Integration Technical Conference 2019 4
Container Platform Availability
• Cloud Integration Platform is based on ICP Foundation.
• ICP Foundation is functionally identical to ICP Cloud Native edition. The only difference is licensing.
• All HA, sizing and deployment information provided for ICP Cloud Native edition applies to Cloud Integration
Platform.
• This presentation provides a summary. For more detail, consult the ICP Cloud Native edition documentation, articles, and education. See the links page at the end of this presentation.
Integration Technical Conference 2019 5
Node Types
• A Kubernetes Node is a VM or bare-metal machine which is part of a Kubernetes cluster.
• Master Nodes run the services that control the cluster, including the etcd database that stores the current state of the cluster.
• Proxy Nodes transmit external requests to the services created inside your cluster.
• Management Nodes host management services such as monitoring, metering, and logging.
• Worker Nodes run Integration Services. If Cloud Integration Platform is running in a cluster with other workloads, then those also run on the worker nodes.
• The Master, Proxy, and Management node types can be combined together, but in production clusters it is recommended to keep them separate so that excessive workload does not impact stability.
• The minimum theoretical configuration is therefore one Master/Proxy/Management node and one Worker node. This would not be highly available and is unlikely to have enough CPU to be usable for more than limited demos.
Note: There are other, more obscure node types, such as dedicated etcd nodes and Vulnerability Advisor nodes, but these are beyond the scope of this presentation.
[Diagrams: the Cloud Integration Platform cluster node types (Master, Proxy, Management, Worker) and the theoretical minimum cluster: one combined Master/Proxy/Management node plus one Worker node.]
Integration Technical Conference 2019 6
Container Platform Availability
• To make the solution fully highly available, each component must be deployed in an HA topology.
• Master Nodes contain software that uses a quorum* paradigm for high availability, so these must be deployed as an odd number of nodes. Typically either 3 or 5 masters are used in an HA cluster, depending on the size of the cluster and the type of load.
• Proxy Nodes do not require a quorum, so 2 or more Proxy nodes are needed for HA.
• Management Nodes do not require a quorum, so 2 or more Management nodes are needed for HA.
• Worker Nodes run Integration Services. Depending on the integration services required, 2 or more worker nodes may be needed for HA. (More detail in subsequent slides.)
• As previously noted, Master/Proxy/Management nodes can of course be combined, so a minimal configuration could be 3 Master/Proxy/Management nodes.
• A topology that is often used is: 3 Master/Proxies + 2 Management + 3 Workers.
[Diagrams: a Cloud Integration Platform HA cluster (3 Master, 2 Proxy, 2 Management, and 2 Worker nodes) and a minimal HA cluster (3 combined Master/Proxy/Management nodes plus 2 Workers).]
Taken from the ICP Think 2019 session.
Think 2019 / 5964A / Feb 15, 2019 / © 2019 IBM Corporation 7
A typical production environment:

Machine Role            Number   vCPU (>= 2.4 GHz)   Memory   Disk Space   Comment
Master                  3        16                  32GB     500GB        3 for HA
Management              2        16                  32GB     500GB        2 for HA
Proxy                   2        4                   16GB     400GB        2 for HA
Vulnerability Advisor   1        8                   32GB     500GB        Optional (non-HA)
Worker Nodes            2-50     8                   32GB     400GB
http://ibm.biz/icpcapacityplan
Integration Technical Conference 2019 8
Deployment Considerations
• How many independent clusters do you need?
• Enterprises deploy multiple clusters for any/all of the following reasons:
• Geographic availability – Ability to survive a regional outage
• To separate organizations
• To separate development, test and production.
• To guard against errors by individual operators.
• Are the Kubernetes nodes deployed into separate failure domains? (See the sketch after this list.)
• Separate Availability zones in a public cloud.
• Separate physical servers/racks in a data centre
• What are the common points of failure?
• See https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozon
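As a sketch of how a failure-domain label is consumed by the scheduler, assuming worker nodes carry the failure-domain.beta.kubernetes.io/zone label from the link above, a pod anti-affinity rule can force replicas into different zones. All names and the image are hypothetical.

```yaml
# Hypothetical Deployment fragment: spread three replicas across availability
# zones via pod anti-affinity on the zone label. All names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-integration-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-integration-service
  template:
    metadata:
      labels:
        app: example-integration-service
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: example-integration-service
            topologyKey: failure-domain.beta.kubernetes.io/zone   # at most one replica per zone
      containers:
      - name: app
        image: example/image:latest   # placeholder image
```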
Example – HA Cluster in AWS
• Leverage availability zones
– Master/management/VA nodes across availability zones
– User applications across availability zones
• AWS ALB/NLB
– Load balancer for the management control plane
– Load balancer for user applications
– Security group to control network access
• EBS as persistent storage
May 28, 2019 ICP Solutioning Guide 101 | IBM Confidential | IBM Cloud Solutioning Centers
Think 2019 / 5964A / Feb 15, 2019 / © 2019 IBM Corporation
Article: http://ibm.biz/icponaws
Documentation:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.2/supported_environments/aws/overview.html
links to a quickstart: https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/
Integration Technical Conference 2019 10
Public Cloud Deployment
• ICP 3.1.1 is supported on IBM Cloud and Amazon.
• The current CIP is based on ICP 3.1.1.
• ICP 3.1.2 adds support for Microsoft Azure.
• On public clouds it is recommended to use the cloud provider's storage solution:
• IBM Block Storage
• Amazon EBS
• Distribute the cluster across 3 availability zones with less than 30ms latency.
• Use multiple AZs in a single region, not multiple regions.
Integration Technical Conference 2019 11
High Availability for Integration services.
Integration Services run on the worker nodes.
The Cloud Integration SolutionPak is composed from the CloudPaks made from the component products.
IBM CloudPaks are developed by the product development teams to embody best practice for deploying the product onto Kubernetes as a secure, scalable, highly available deployment.
Thus, at a high level, all that has to be ensured to provide a highly available deployment is:
• There are sufficient nodes for the solution.*
• The nodes are deployed into separate failure domains so that a single failure will not take out multiple nodes.**
Kubernetes takes care of:
• Running appropriate numbers of instances of the SolutionPak, spread across the available workers.
• Mixing the workloads together on the available worker nodes, taking account of the resources they need (see the sketch below).
• Restarting workloads that fail, and recovering from failed nodes by scheduling workloads onto alternative nodes.
* Some of the component products require 2 worker nodes for active/standby style HA; others require 3 or more workers for quorum-style deployment.
** In larger clusters the nodes should be spread between 3 or more availability zones.
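As a small illustration of the declared resource requirements Kubernetes schedules against, here is a hypothetical pod fragment; the names and values are illustrative only.

```yaml
# Sketch: the scheduler packs workloads onto workers by their declared requests.
apiVersion: v1
kind: Pod
metadata:
  name: example-workload          # illustrative name
spec:
  containers:
  - name: app
    image: example/image:latest   # placeholder image
    resources:
      requests:                   # what the scheduler reserves on a node
        cpu: "500m"
        memory: 512Mi
      limits:                     # hard ceiling at runtime
        cpu: "1"
        memory: 1Gi
```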
[Diagram: the Cloud Integration Platform HA cluster (3 Master, 2 Proxy, 2 Management nodes) with three Worker nodes hosting the Integration Services.]
Integration Technical Conference 2019 12
Myth Busting
[Diagram: the same Cloud Integration Platform HA cluster with three Worker nodes.]
• MYTH 1: I need separate worker nodes for each Integration Service.
• MYTH 2: I cannot run multiple SolutionPaks on the same nodes as 'cloud-native' workloads.
• REALITY: Kubernetes will schedule pods across the available nodes based on its scheduling policies and the declared resource requirements. This will almost certainly mix workloads together on nodes unless you explicitly prevent it.
• It is possible to use taints and tolerations to control which nodes a workload is scheduled to. (An advanced topic, sketched briefly below but not covered in depth here.)
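For reference, a minimal sketch of that advanced control, using a hypothetical node label/taint pair: the taint keeps other pods off the node, and a matching toleration plus nodeSelector steers the intended workload onto it.

```yaml
# Hypothetical: reserve a node for MQ-style workloads.
# Taint (and label) the node first, e.g.:
#   kubectl taint nodes worker-3 dedicated=mq:NoSchedule
#   kubectl label nodes worker-3 dedicated=mq
apiVersion: v1
kind: Pod
metadata:
  name: mq-pinned-pod              # illustrative name
spec:
  tolerations:                     # allows (but does not force) scheduling onto the tainted node
  - key: dedicated
    operator: Equal
    value: mq
    effect: NoSchedule
  nodeSelector:                    # forces the pod onto the labelled node
    dedicated: mq
  containers:
  - name: app
    image: example/image:latest    # placeholder image
```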
Integration Technical Conference 2019 13
Summary - High Availability for Integration.
Product         Approach(es) to HA                                Minimum number of worker nodes in the cluster
MQ              Data availability – failover;                     2
                service availability – active/active
Event Streams   Quorum                                            3
APIC            Quorum                                            3
App Connect     Stateless – active/active; stateful – failover    2
Aspera          Quorum                                            3
DataPower       Quorum                                            3
At a high level, all you need to do is deploy the cluster in an HA configuration with at least 3 workers, then deploy the services from their Helm charts and they will be highly available by default.
Integration Technical Conference 2019 14
Cloud Integration Platform High Availability
Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the high-availability considerations for the container platform. Based on this, we then explore HA for each Integration Service.
Agenda:
• Overview of Cloud Integration Platform Architecture.
• Kubernetes and IBM Cloud Private High Availability Concerns.
• Deployment considerations
• MQ High Availability
• Event Streams High Availability
• API Connect High Availability
• App Connect High Availability
• Aspera High Availability
Integration Technical Conference 2019 15
IBM MQ High Availability Concepts
Availability of the 'MQ Service':
• The 'MQ Service' is available when applications can connect to an MQ queue manager and send and receive messages.
• The MQ service can be made highly available by running multiple queue managers in active/active configurations.
Availability of individual messages:
• An individual message that has been sent to an MQ queue is available when a client can receive that particular message.
• The data can be made highly available by minimizing the time during which the queue manager holding the message is unavailable.
IBM MQ High Availability Options on Distributed Servers
[Diagram: three options.
• Single resilient queue manager – one MQ instance on HA storage; platform load balancing, platform HA.
• Multi-instance queue manager – an active instance and a standby sharing HA storage; client load balancing, IBM MQ product HA.
• Replicated data queue manager – an active instance and two standbys, each on local replicated storage; client or server load balancing, IBM MQ product HA.]
IBM MQ High Availability in Cloud Pak / CIP
[Diagram: the same three options in a Kubernetes context.
• Single resilient queue manager – the current approach; platform load balancing, with platform HA supplied by container orchestration.
• Multi-instance queue manager – use within Kubernetes is being investigated.
• Replicated data queue manager – the technology is not supported in Kubernetes.]
Single Resilient Queue Manager on Kubernetes
• Container restarted by Kubernetes*
• Data persisted to external storage
• The restarted container connects to the existing storage
• An IP gateway routes traffic to the active instance
• Implemented as a Kubernetes stateful set of 1 (see the sketch below)
• Implications: both the service and the messages stored in the single queue manager are unavailable during failover
[Diagram: an application connects through an IP gateway (from the container platform) to the single MQ container, whose data resides on HA network storage.]
* Total Kubernetes node failure considerations described in
https://developer.ibm.com/messaging/2018/05/02/availability-scalability-ibm-mq-containers/
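The "stateful set of 1" pattern can be sketched as below; the names, image, and storage size are illustrative assumptions, not the CIP chart's actual manifest.

```yaml
# Hypothetical sketch of a single resilient queue manager as a stateful set of 1.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: single-resilient-qmgr
spec:
  serviceName: single-resilient-qmgr
  replicas: 1                       # exactly one queue manager instance at a time
  selector:
    matchLabels:
      app: single-resilient-qmgr
  template:
    metadata:
      labels:
        app: single-resilient-qmgr
    spec:
      containers:
      - name: qmgr
        image: example/mq:latest    # placeholder image reference
        volumeMounts:
        - name: qm-data
          mountPath: /mnt/mqm       # queue manager data on external storage
  volumeClaimTemplates:             # the PVC survives pod rescheduling,
  - metadata:                       # so the restarted container reconnects to it
      name: qm-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
```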
Improving the MQ Service availability
Hybrid Cloud / March 2018 / © 2018 IBM Corporation
[Diagram: a single resilient container versus multiple resilient containers, with applications issuing MQ PUTs and MQ GETs through connection routing.]
• Multiple resilient containers provide access to store messages even in failover situations; connection routing is provided to distribute the traffic across the available instances.
• Applications retrieving messages attach to the individual queue manager; connection routing is provided to route traffic to the container's location.
Horizontal Scaling
[Diagram: applications connect through a connection-routing layer to a messaging layer of MQ containers. MQ PUTs are distributed across the instances, while each MQ GET attaches to a specific instance. Scale the MQ instances based on workload requirements.]
Connection Routing
Hybrid Cloud / March 2018 / © 2018 IBM Corporation
• Static routing (endpoint list): the client embeds the endpoints. Performance impact when the primary is unavailable; brittle configuration; no load balancing.
• Client Channel Definition Table (CCDT): the client references the endpoints. Enhanced workload management strategies; central configuration management.
• Load balancer (IP gateway): the client references the endpoints. Enhanced workload management strategies; central configuration management; not recommended for JMS.
[Diagram: an application with an embedded endpoint list, an application using a CCDT, and an application behind an IP gateway, each connecting to a pair of queue managers.]
Recommended Routing
[Diagram: applications resolve connection details through CCDTs, and IP gateways route the network connections into the messaging layer of MQ containers. MQ PUTs are distributed across the instances; MQ GETs attach to specific instances.]
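In Kubernetes terms, the IP gateway role is typically played by a Service in front of the MQ pods. A minimal, hypothetical sketch (names are illustrative; 1414 is MQ's conventional listener port):

```yaml
# Sketch of the "IP gateway": a Service that routes client connections
# to the MQ pods behind it. Names and selector are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: mq-gateway
spec:
  type: LoadBalancer      # or NodePort/Ingress, depending on the platform
  selector:
    app: mq               # matches the MQ pods behind the gateway
  ports:
  - name: mq-listener
    port: 1414            # conventional MQ listener port
    targetPort: 1414
```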
Integration Technical Conference 2019 23
Cloud Integration Platform High Availability
Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the high-availability considerations for the container platform. Based on this, we then explore HA for each Integration Service.
Agenda:
• Overview of Cloud Integration Platform Architecture.
• Kubernetes and IBM Cloud Private High Availability Concerns.
• Deployment considerations
• MQ High Availability
• Event Streams High Availability
• API Connect High Availability
• App Connect High Availability
• Aspera High Availability
Integration Technical Conference 2019 24
Event Streams High Availability
• Event Streams deploys Apache Kafka in an HA topology by default.
• The chart deploys brokers as a stateful set with 3 members by default.
• Kubernetes will schedule the pods to different nodes.
• Kafka's architecture is inherently HA; the client connection protocol handles discovery and failure.
• The default for topic creation is 3 replicas with minimum in-sync replicas = 2 (see the sketch below).
• For multi-AZ deployments, the number of brokers must currently match the number of AZs.
[Diagram: clients use intelligent discovery and reconnection logic to connect to the Kafka brokers.]
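To make the replication defaults concrete, the fragment below restates them as a hypothetical topic definition. The YAML keys are purely illustrative (Event Streams' actual chart and topic tooling use their own schema), but the replication semantics are standard Kafka.

```yaml
# Illustrative only -- not the Event Streams chart's real schema.
topic:
  name: orders                  # hypothetical topic name
  partitions: 3
  replicationFactor: 3          # one replica per broker (and per AZ)
  config:
    min.insync.replicas: "2"    # with acks=all, writes survive the loss of one broker
```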
Integration Technical Conference 2019 25
Cloud Integration Platform High Availability
Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the high-availability considerations for the container platform. Based on this, we then explore HA for each Integration Service.
Agenda:
• Overview of Cloud Integration Platform Architecture.
• Kubernetes and IBM Cloud Private High Availability Concerns.
• Deployment considerations
• MQ High Availability
• Event Streams High Availability
• API Connect High Availability
• App Connect High Availability
• Aspera High Availability
Integration Technical Conference 2019 26
API Connect High Availability
[Diagram: an ICP cluster with three worker nodes hosting the Gateway, Analytics, Portal, and Manager components.]
• API Connect requires 3 worker nodes for HA.
• The chart deploys as HA by default (see the values sketch below):
– global setting: mode = standard (dev mode is non-HA)
– Cassandra cluster size = 3
– Gateway replica count = 3
• Kubernetes distributes the resulting resources across 3 (or more) nodes of the cluster for high availability.
The API Connect whitepaper discusses multi-cluster, multi-data centre, and DR topics.
http://ibm.biz/APIC2018paper
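Restating those defaults as a hedged values fragment (the key names are illustrative approximations, not the chart's documented schema):

```yaml
# Hypothetical values fragment mirroring the HA defaults listed above.
global:
  mode: standard        # "dev" mode would be non-HA
cassandra:
  clusterSize: 3        # Cassandra cluster size = 3
gateway:
  replicaCount: 3       # Gateway replica count = 3
```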
Integration Technical Conference 2019 27
Cloud Integration Platform High Availability
Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the high-availability considerations for the container platform. Based on this, we then explore HA for each Integration Service.
Agenda:
• Overview of Cloud Integration Platform Architecture.
• Kubernetes and IBM Cloud Private High Availability Concerns.
• Deployment considerations
• MQ High Availability
• Event Streams High Availability
• API Connect High Availability
• App Connect High Availability
• Aspera High Availability
Integration Technical Conference 2019 28
App Connect High Availability
• The ACE control plane is HA (a replica set of 3) by default.
• Integration servers deployed without local MQ queue managers (QMs) are stateless:
– deployed HA (a replica set of 3) by default (see the sketch below);
– the BAR file is retrieved from the control plane at startup.
• Integration servers deployed with a local QM are deployed like MQ:
– a single resilient queue manager (stateful set of 1).
[Diagram: an ICP cluster with three worker nodes running the ACE control plane (CP x3), three stateless ACE servers, and a stateful ACE + MQ server.]
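A minimal sketch of the stateless pattern follows, assuming hypothetical names and an illustrative mechanism for fetching the BAR file; the real chart's manifest will differ.

```yaml
# Sketch: three interchangeable stateless ACE server replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ace-stateless-server
spec:
  replicas: 3                    # replica set of 3, as described above
  selector:
    matchLabels:
      app: ace-stateless-server
  template:
    metadata:
      labels:
        app: ace-stateless-server
    spec:
      containers:
      - name: ace-server
        image: example/ace:latest                       # placeholder image
        env:
        - name: BAR_URL                                 # illustrative: BAR retrieved at startup
          value: "https://ace-control-plane/bars/flow.bar"
```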
Integration Technical Conference 2019 29
Cloud Integration Platform High Availability
Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the high-availability considerations for the container platform. Based on this, we then explore HA for each Integration Service.
Agenda:
• Overview of Cloud Integration Platform Architecture.
• Kubernetes and IBM Cloud Private High Availability Concerns.
• Deployment considerations
• MQ High Availability
• Event Streams High Availability
• API Connect High Availability
• App Connect High Availability
• Aspera High Availability
Integration Technical Conference 2019 30
Aspera HSTS
• Aspera HSTS has an architecture that can be deployed highly available.
• Aspera HSTS in the CIP GA code cannot be deployed as highly available directly from the Helm chart.
• The deployment can be modified to make it highly available.
Integration Technical Conference 2019 31
Aspera HSTS – Routing HA
Aspera HSTS serves two types of traffic:
• HTTPS API traffic
• This traffic is routed via ICP's Ingress Controller, with high availability.
Integration Technical Conference 2019 32
Aspera HSTS – Routing HA
• FASP transfer traffic
• This traffic is routed by custom proxies built by Aspera.
• These can be scaled horizontally as a ReplicaSet and are redundantly available.
• The proxies co-ordinate with the Load Balancer to find available transfer pods.
• The Load Balancer is stateless and can run as a ReplicaSet.
Integration Technical Conference 2019 33
Aspera HSTS – Application HA
The Aspera Node API is a stateless HTTP server backed by an HA Redis stateful set.
Integration Technical Conference 2019 34
Aspera HSTS – Application HA
• Aspera transfer pods are each dedicated to a single transfer session.
• The Swarm Service manages the transfer pods, auto-scaling as needed and ensuring free pods are available at all times.
• If the Swarm Service dies, no existing traffic is impacted, but no new transfer pods will be scheduled until it is restarted.
• The Aspera Swarm Service is currently deployed as a single instance.
• The Swarm Service will be adding leader election as part of L2 Certification.
Integration Technical Conference 2019 35
Aspera HSTS – Control Plane
• The Stats Service consumes transfer events from Kafka and publishes pod availability for consumption by the Aspera Load Balancer.
• The Stats Service is currently a single instance, but will be updated to use leader election as part of L2 Certification.
• If the Stats Service fails, load balancing will be based on stale information until the Stats Service restarts.
• Kafka is provided by IBM Event Streams, which is an HA service.
Thank You
Weitere ähnliche Inhalte

Was ist angesagt?

Building a Better Business Case for Migrating to Cloud
Building a Better Business Case for Migrating to CloudBuilding a Better Business Case for Migrating to Cloud
Building a Better Business Case for Migrating to CloudAmazon Web Services
 
IBM MQ - What's new in 9.2
IBM MQ - What's new in 9.2IBM MQ - What's new in 9.2
IBM MQ - What's new in 9.2David Ware
 
IBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster RecoveryIBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster RecoveryRob Convery
 
Microservices architecture
Microservices architectureMicroservices architecture
Microservices architectureFaren faren
 
Executing a Large-Scale Migration to AWS
Executing a Large-Scale Migration to AWSExecuting a Large-Scale Migration to AWS
Executing a Large-Scale Migration to AWSAmazon Web Services
 
IBM Cloud Pak for Integration 2020.2.1 installation
IBM Cloud Pak for Integration 2020.2.1 installation IBM Cloud Pak for Integration 2020.2.1 installation
IBM Cloud Pak for Integration 2020.2.1 installation khawkwf
 
Introduction to the Well-Architected Framework and Tool - SVC208 - Anaheim AW...
Introduction to the Well-Architected Framework and Tool - SVC208 - Anaheim AW...Introduction to the Well-Architected Framework and Tool - SVC208 - Anaheim AW...
Introduction to the Well-Architected Framework and Tool - SVC208 - Anaheim AW...Amazon Web Services
 
AWS Summit Seoul 2023 | AWS에서 최소한의 비용으로 구현하는 멀티리전 DR 자동화 구성
AWS Summit Seoul 2023 | AWS에서 최소한의 비용으로 구현하는 멀티리전 DR 자동화 구성AWS Summit Seoul 2023 | AWS에서 최소한의 비용으로 구현하는 멀티리전 DR 자동화 구성
AWS Summit Seoul 2023 | AWS에서 최소한의 비용으로 구현하는 멀티리전 DR 자동화 구성Amazon Web Services Korea
 
IBM MQ cloud architecture blueprint
IBM MQ cloud architecture blueprintIBM MQ cloud architecture blueprint
IBM MQ cloud architecture blueprintMatt Roberts
 
AWS 기반의 마이크로 서비스 아키텍쳐 구현 방안 :: 김필중 :: AWS Summit Seoul 20
AWS 기반의 마이크로 서비스 아키텍쳐 구현 방안 :: 김필중 :: AWS Summit Seoul 20AWS 기반의 마이크로 서비스 아키텍쳐 구현 방안 :: 김필중 :: AWS Summit Seoul 20
AWS 기반의 마이크로 서비스 아키텍쳐 구현 방안 :: 김필중 :: AWS Summit Seoul 20Amazon Web Services Korea
 
MSA ( Microservices Architecture ) 발표 자료 다운로드
MSA ( Microservices Architecture ) 발표 자료 다운로드MSA ( Microservices Architecture ) 발표 자료 다운로드
MSA ( Microservices Architecture ) 발표 자료 다운로드Opennaru, inc.
 
Micro services Architecture
Micro services ArchitectureMicro services Architecture
Micro services ArchitectureAraf Karsh Hamid
 
Cloud Adoption Framework Phase one-moving to the cloud
Cloud Adoption Framework Phase one-moving to the cloudCloud Adoption Framework Phase one-moving to the cloud
Cloud Adoption Framework Phase one-moving to the cloudAnthony Clendenen
 
Building an Active-Active IBM MQ System
Building an Active-Active IBM MQ SystemBuilding an Active-Active IBM MQ System
Building an Active-Active IBM MQ Systemmatthew1001
 
Amazon RDS Proxy 집중 탐구 - 윤석찬 :: AWS Unboxing 온라인 세미나
Amazon RDS Proxy 집중 탐구 - 윤석찬 :: AWS Unboxing 온라인 세미나Amazon RDS Proxy 집중 탐구 - 윤석찬 :: AWS Unboxing 온라인 세미나
Amazon RDS Proxy 집중 탐구 - 윤석찬 :: AWS Unboxing 온라인 세미나Amazon Web Services Korea
 
4. 대용량 아키텍쳐 설계 패턴
4. 대용량 아키텍쳐 설계 패턴4. 대용량 아키텍쳐 설계 패턴
4. 대용량 아키텍쳐 설계 패턴Terry Cho
 
Amazon EKS - Elastic Container Service for Kubernetes
Amazon EKS - Elastic Container Service for KubernetesAmazon EKS - Elastic Container Service for Kubernetes
Amazon EKS - Elastic Container Service for KubernetesAmazon Web Services
 
IBM MQ Online Tutorials
IBM MQ Online TutorialsIBM MQ Online Tutorials
IBM MQ Online TutorialsBigClasses.com
 
IDC 서버 몽땅 AWS로 이전하기 위한 5가지 방법 - 윤석찬 (AWS 테크에반젤리스트)
IDC 서버 몽땅 AWS로 이전하기 위한 5가지 방법 - 윤석찬 (AWS 테크에반젤리스트) IDC 서버 몽땅 AWS로 이전하기 위한 5가지 방법 - 윤석찬 (AWS 테크에반젤리스트)
IDC 서버 몽땅 AWS로 이전하기 위한 5가지 방법 - 윤석찬 (AWS 테크에반젤리스트) Amazon Web Services Korea
 

Was ist angesagt? (20)

Building a Better Business Case for Migrating to Cloud
Building a Better Business Case for Migrating to CloudBuilding a Better Business Case for Migrating to Cloud
Building a Better Business Case for Migrating to Cloud
 
IBM MQ - What's new in 9.2
IBM MQ - What's new in 9.2IBM MQ - What's new in 9.2
IBM MQ - What's new in 9.2
 
IBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster RecoveryIBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
 
Microservices architecture
Microservices architectureMicroservices architecture
Microservices architecture
 
Executing a Large-Scale Migration to AWS
Executing a Large-Scale Migration to AWSExecuting a Large-Scale Migration to AWS
Executing a Large-Scale Migration to AWS
 
IBM Cloud Pak for Integration 2020.2.1 installation
IBM Cloud Pak for Integration 2020.2.1 installation IBM Cloud Pak for Integration 2020.2.1 installation
IBM Cloud Pak for Integration 2020.2.1 installation
 
Introduction to the Well-Architected Framework and Tool - SVC208 - Anaheim AW...
Introduction to the Well-Architected Framework and Tool - SVC208 - Anaheim AW...Introduction to the Well-Architected Framework and Tool - SVC208 - Anaheim AW...
Introduction to the Well-Architected Framework and Tool - SVC208 - Anaheim AW...
 
Ansible
AnsibleAnsible
Ansible
 
AWS Summit Seoul 2023 | AWS에서 최소한의 비용으로 구현하는 멀티리전 DR 자동화 구성
AWS Summit Seoul 2023 | AWS에서 최소한의 비용으로 구현하는 멀티리전 DR 자동화 구성AWS Summit Seoul 2023 | AWS에서 최소한의 비용으로 구현하는 멀티리전 DR 자동화 구성
AWS Summit Seoul 2023 | AWS에서 최소한의 비용으로 구현하는 멀티리전 DR 자동화 구성
 
IBM MQ cloud architecture blueprint
IBM MQ cloud architecture blueprintIBM MQ cloud architecture blueprint
IBM MQ cloud architecture blueprint
 
AWS 기반의 마이크로 서비스 아키텍쳐 구현 방안 :: 김필중 :: AWS Summit Seoul 20
AWS 기반의 마이크로 서비스 아키텍쳐 구현 방안 :: 김필중 :: AWS Summit Seoul 20AWS 기반의 마이크로 서비스 아키텍쳐 구현 방안 :: 김필중 :: AWS Summit Seoul 20
AWS 기반의 마이크로 서비스 아키텍쳐 구현 방안 :: 김필중 :: AWS Summit Seoul 20
 
MSA ( Microservices Architecture ) 발표 자료 다운로드
MSA ( Microservices Architecture ) 발표 자료 다운로드MSA ( Microservices Architecture ) 발표 자료 다운로드
MSA ( Microservices Architecture ) 발표 자료 다운로드
 
Micro services Architecture
Micro services ArchitectureMicro services Architecture
Micro services Architecture
 
Cloud Adoption Framework Phase one-moving to the cloud
Cloud Adoption Framework Phase one-moving to the cloudCloud Adoption Framework Phase one-moving to the cloud
Cloud Adoption Framework Phase one-moving to the cloud
 
Building an Active-Active IBM MQ System
Building an Active-Active IBM MQ SystemBuilding an Active-Active IBM MQ System
Building an Active-Active IBM MQ System
 
Amazon RDS Proxy 집중 탐구 - 윤석찬 :: AWS Unboxing 온라인 세미나
Amazon RDS Proxy 집중 탐구 - 윤석찬 :: AWS Unboxing 온라인 세미나Amazon RDS Proxy 집중 탐구 - 윤석찬 :: AWS Unboxing 온라인 세미나
Amazon RDS Proxy 집중 탐구 - 윤석찬 :: AWS Unboxing 온라인 세미나
 
4. 대용량 아키텍쳐 설계 패턴
4. 대용량 아키텍쳐 설계 패턴4. 대용량 아키텍쳐 설계 패턴
4. 대용량 아키텍쳐 설계 패턴
 
Amazon EKS - Elastic Container Service for Kubernetes
Amazon EKS - Elastic Container Service for KubernetesAmazon EKS - Elastic Container Service for Kubernetes
Amazon EKS - Elastic Container Service for Kubernetes
 
IBM MQ Online Tutorials
IBM MQ Online TutorialsIBM MQ Online Tutorials
IBM MQ Online Tutorials
 
IDC 서버 몽땅 AWS로 이전하기 위한 5가지 방법 - 윤석찬 (AWS 테크에반젤리스트)
IDC 서버 몽땅 AWS로 이전하기 위한 5가지 방법 - 윤석찬 (AWS 테크에반젤리스트) IDC 서버 몽땅 AWS로 이전하기 위한 5가지 방법 - 윤석찬 (AWS 테크에반젤리스트)
IDC 서버 몽땅 AWS로 이전하기 위한 5가지 방법 - 윤석찬 (AWS 테크에반젤리스트)
 

Ähnlich wie IBM Cloud Integration Platform High Availability - Integration Tech Conference

MuleSoft Surat Meetup#42 - Runtime Fabric Manager on Self Managed Kubernetes ...
MuleSoft Surat Meetup#42 - Runtime Fabric Manager on Self Managed Kubernetes ...MuleSoft Surat Meetup#42 - Runtime Fabric Manager on Self Managed Kubernetes ...
MuleSoft Surat Meetup#42 - Runtime Fabric Manager on Self Managed Kubernetes ...Jitendra Bafna
 
Crossing the river by feeling the stones from legacy to cloud native applica...
Crossing the river by feeling the stones  from legacy to cloud native applica...Crossing the river by feeling the stones  from legacy to cloud native applica...
Crossing the river by feeling the stones from legacy to cloud native applica...OPNFV
 
Could the “C” in HPC stand for Cloud?
Could the “C” in HPC stand for Cloud?Could the “C” in HPC stand for Cloud?
Could the “C” in HPC stand for Cloud?IBM India Smarter Computing
 
MuleSoft Meetup Roma - CloudHub Networking Stategies
MuleSoft Meetup Roma -  CloudHub Networking StategiesMuleSoft Meetup Roma -  CloudHub Networking Stategies
MuleSoft Meetup Roma - CloudHub Networking StategiesAlfonso Martino
 
20200113 - IBM Cloud CĂ´te d'Azur - DeepDive Kubernetes
20200113 - IBM Cloud CĂ´te d'Azur - DeepDive Kubernetes20200113 - IBM Cloud CĂ´te d'Azur - DeepDive Kubernetes
20200113 - IBM Cloud CĂ´te d'Azur - DeepDive KubernetesIBM France Lab
 
Effective administration of IBM Integration Bus - Sanjay Nagchowdhury
Effective administration of IBM Integration Bus - Sanjay NagchowdhuryEffective administration of IBM Integration Bus - Sanjay Nagchowdhury
Effective administration of IBM Integration Bus - Sanjay NagchowdhuryKaren Broughton-Mabbitt
 
A generic log analyzer for auto recovery of container orchestration system
A generic log analyzer for auto recovery of container orchestration systemA generic log analyzer for auto recovery of container orchestration system
A generic log analyzer for auto recovery of container orchestration systemConference Papers
 
IBM API Connect Deployment `Good Practices - IBM Think 2018
IBM API Connect Deployment `Good Practices - IBM Think 2018IBM API Connect Deployment `Good Practices - IBM Think 2018
IBM API Connect Deployment `Good Practices - IBM Think 2018Chris Phillips
 
Essential Elements of an Enterprise PaaS
Essential Elements of an Enterprise PaaSEssential Elements of an Enterprise PaaS
Essential Elements of an Enterprise PaaSWSO2
 
ApacheCon Essential Elements of an Enterprise PaaS
ApacheCon Essential Elements of an Enterprise PaaSApacheCon Essential Elements of an Enterprise PaaS
ApacheCon Essential Elements of an Enterprise PaaSLakmal Warusawithana
 
Comparison of Current Service Mesh Architectures
Comparison of Current Service Mesh ArchitecturesComparison of Current Service Mesh Architectures
Comparison of Current Service Mesh ArchitecturesMirantis
 
Cloudify 4.6 highlights webinar
Cloudify 4.6 highlights webinarCloudify 4.6 highlights webinar
Cloudify 4.6 highlights webinarCloudify Community
 
IRJET- ALPYNE - A Grid Computing Framework
IRJET- ALPYNE - A Grid Computing FrameworkIRJET- ALPYNE - A Grid Computing Framework
IRJET- ALPYNE - A Grid Computing FrameworkIRJET Journal
 
Lessons Learned during IBM SmartCloud Orchestrator Deployment at a Large Tel...
Lessons Learned during IBM SmartCloud Orchestrator Deployment at a Large Tel...Lessons Learned during IBM SmartCloud Orchestrator Deployment at a Large Tel...
Lessons Learned during IBM SmartCloud Orchestrator Deployment at a Large Tel...Eduardo Patrocinio
 
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...Peter Broadhurst
 
Microservices Architecture with AWS @ AnyMind Group
Microservices Architecture with AWS @ AnyMind GroupMicroservices Architecture with AWS @ AnyMind Group
Microservices Architecture with AWS @ AnyMind GroupGiang Tran
 
AnyMind Group Tech Talk - Microservices architecture with AWS
AnyMind Group Tech Talk - Microservices architecture with AWSAnyMind Group Tech Talk - Microservices architecture with AWS
AnyMind Group Tech Talk - Microservices architecture with AWSNhân Nguyễn
 
M10: How to implement mq in a containerized architecture ITC 2019
M10: How to implement mq in a containerized architecture ITC 2019M10: How to implement mq in a containerized architecture ITC 2019
M10: How to implement mq in a containerized architecture ITC 2019Robert Parker
 
Microservices, Containers, and Beyond
Microservices, Containers, and BeyondMicroservices, Containers, and Beyond
Microservices, Containers, and BeyondLakmal Warusawithana
 

Ähnlich wie IBM Cloud Integration Platform High Availability - Integration Tech Conference (20)

MuleSoft Surat Meetup#42 - Runtime Fabric Manager on Self Managed Kubernetes ...
MuleSoft Surat Meetup#42 - Runtime Fabric Manager on Self Managed Kubernetes ...MuleSoft Surat Meetup#42 - Runtime Fabric Manager on Self Managed Kubernetes ...
MuleSoft Surat Meetup#42 - Runtime Fabric Manager on Self Managed Kubernetes ...
 
Crossing the river by feeling the stones from legacy to cloud native applica...
Crossing the river by feeling the stones  from legacy to cloud native applica...Crossing the river by feeling the stones  from legacy to cloud native applica...
Crossing the river by feeling the stones from legacy to cloud native applica...
 
Could the “C” in HPC stand for Cloud?
Could the “C” in HPC stand for Cloud?Could the “C” in HPC stand for Cloud?
Could the “C” in HPC stand for Cloud?
 
MuleSoft Meetup Roma - CloudHub Networking Stategies
MuleSoft Meetup Roma -  CloudHub Networking StategiesMuleSoft Meetup Roma -  CloudHub Networking Stategies
MuleSoft Meetup Roma - CloudHub Networking Stategies
 
20200113 - IBM Cloud CĂ´te d'Azur - DeepDive Kubernetes
20200113 - IBM Cloud CĂ´te d'Azur - DeepDive Kubernetes20200113 - IBM Cloud CĂ´te d'Azur - DeepDive Kubernetes
20200113 - IBM Cloud CĂ´te d'Azur - DeepDive Kubernetes
 
Effective administration of IBM Integration Bus - Sanjay Nagchowdhury
Effective administration of IBM Integration Bus - Sanjay NagchowdhuryEffective administration of IBM Integration Bus - Sanjay Nagchowdhury
Effective administration of IBM Integration Bus - Sanjay Nagchowdhury
 
A generic log analyzer for auto recovery of container orchestration system
A generic log analyzer for auto recovery of container orchestration systemA generic log analyzer for auto recovery of container orchestration system
A generic log analyzer for auto recovery of container orchestration system
 
IBM API Connect Deployment `Good Practices - IBM Think 2018
IBM API Connect Deployment `Good Practices - IBM Think 2018IBM API Connect Deployment `Good Practices - IBM Think 2018
IBM API Connect Deployment `Good Practices - IBM Think 2018
 
Essential Elements of an Enterprise PaaS
Essential Elements of an Enterprise PaaSEssential Elements of an Enterprise PaaS
Essential Elements of an Enterprise PaaS
 
ApacheCon Essential Elements of an Enterprise PaaS
ApacheCon Essential Elements of an Enterprise PaaSApacheCon Essential Elements of an Enterprise PaaS
ApacheCon Essential Elements of an Enterprise PaaS
 
Comparison of Current Service Mesh Architectures
Comparison of Current Service Mesh ArchitecturesComparison of Current Service Mesh Architectures
Comparison of Current Service Mesh Architectures
 
Cloudify 4.6 highlights webinar
Cloudify 4.6 highlights webinarCloudify 4.6 highlights webinar
Cloudify 4.6 highlights webinar
 
IRJET- ALPYNE - A Grid Computing Framework
IRJET- ALPYNE - A Grid Computing FrameworkIRJET- ALPYNE - A Grid Computing Framework
IRJET- ALPYNE - A Grid Computing Framework
 
The rise of microservices
The rise of microservicesThe rise of microservices
The rise of microservices
 
Lessons Learned during IBM SmartCloud Orchestrator Deployment at a Large Tel...
Lessons Learned during IBM SmartCloud Orchestrator Deployment at a Large Tel...Lessons Learned during IBM SmartCloud Orchestrator Deployment at a Large Tel...
Lessons Learned during IBM SmartCloud Orchestrator Deployment at a Large Tel...
 
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
 
Microservices Architecture with AWS @ AnyMind Group
Microservices Architecture with AWS @ AnyMind GroupMicroservices Architecture with AWS @ AnyMind Group
Microservices Architecture with AWS @ AnyMind Group
 
AnyMind Group Tech Talk - Microservices architecture with AWS
AnyMind Group Tech Talk - Microservices architecture with AWSAnyMind Group Tech Talk - Microservices architecture with AWS
AnyMind Group Tech Talk - Microservices architecture with AWS
 
M10: How to implement mq in a containerized architecture ITC 2019
M10: How to implement mq in a containerized architecture ITC 2019M10: How to implement mq in a containerized architecture ITC 2019
M10: How to implement mq in a containerized architecture ITC 2019
 
Microservices, Containers, and Beyond
Microservices, Containers, and BeyondMicroservices, Containers, and Beyond
Microservices, Containers, and Beyond
 

Mehr von Robert Nicholson

IBM Cloud Integration Platform Introduction - Integration Tech Conference
IBM Cloud Integration Platform Introduction - Integration Tech ConferenceIBM Cloud Integration Platform Introduction - Integration Tech Conference
IBM Cloud Integration Platform Introduction - Integration Tech ConferenceRobert Nicholson
 
IBM Hybrid Integration Platform
IBM Hybrid Integration PlatformIBM Hybrid Integration Platform
IBM Hybrid Integration PlatformRobert Nicholson
 
IBM Interconnect 2016 - Hybrid Cloud Messaging
IBM Interconnect 2016 - Hybrid Cloud MessagingIBM Interconnect 2016 - Hybrid Cloud Messaging
IBM Interconnect 2016 - Hybrid Cloud MessagingRobert Nicholson
 
Platform as a Service - CloudFoundry and IBM Bluemix - Developer South Coast
Platform as a Service - CloudFoundry and IBM Bluemix - Developer South CoastPlatform as a Service - CloudFoundry and IBM Bluemix - Developer South Coast
Platform as a Service - CloudFoundry and IBM Bluemix - Developer South CoastRobert Nicholson
 
Introducing MQ Light - IBM Interconnect 2015 session AME4181
Introducing MQ Light - IBM Interconnect 2015 session AME4181Introducing MQ Light - IBM Interconnect 2015 session AME4181
Introducing MQ Light - IBM Interconnect 2015 session AME4181Robert Nicholson
 
MQ Light in IBM MQ: IBM Interconnect 2015 session AME4182
MQ Light in IBM MQ:  IBM Interconnect 2015 session AME4182MQ Light in IBM MQ:  IBM Interconnect 2015 session AME4182
MQ Light in IBM MQ: IBM Interconnect 2015 session AME4182Robert Nicholson
 
MQ Light for Bluemix - IBM Interconnect 2015 session AME4183
MQ Light for Bluemix - IBM Interconnect 2015 session AME4183MQ Light for Bluemix - IBM Interconnect 2015 session AME4183
MQ Light for Bluemix - IBM Interconnect 2015 session AME4183Robert Nicholson
 
Mq light For Guide Share Europe 2014
Mq light For Guide Share Europe 2014Mq light For Guide Share Europe 2014
Mq light For Guide Share Europe 2014Robert Nicholson
 
Messaging in the Cloud with IBM MQ Light and IBM Bluemix
Messaging in the Cloud with IBM MQ Light and IBM BluemixMessaging in the Cloud with IBM MQ Light and IBM Bluemix
Messaging in the Cloud with IBM MQ Light and IBM BluemixRobert Nicholson
 
Session 1897 messaging in the cloud with elastic mq mq light and bluemix-impa...
Session 1897 messaging in the cloud with elastic mq mq light and bluemix-impa...Session 1897 messaging in the cloud with elastic mq mq light and bluemix-impa...
Session 1897 messaging in the cloud with elastic mq mq light and bluemix-impa...Robert Nicholson
 
IBM IMPACT 2009 Session 3100 - Dynamic Scripting and Rich Web 2.0 Interfaces ...
IBM IMPACT 2009 Session 3100 - Dynamic Scripting and Rich Web 2.0 Interfaces ...IBM IMPACT 2009 Session 3100 - Dynamic Scripting and Rich Web 2.0 Interfaces ...
IBM IMPACT 2009 Session 3100 - Dynamic Scripting and Rich Web 2.0 Interfaces ...Robert Nicholson
 
IBM IMPACT 2009 Conference Session 2024 - WebSphere sMash Integration, PHP wi...
IBM IMPACT 2009 Conference Session 2024 - WebSphere sMash Integration, PHP wi...IBM IMPACT 2009 Conference Session 2024 - WebSphere sMash Integration, PHP wi...
IBM IMPACT 2009 Conference Session 2024 - WebSphere sMash Integration, PHP wi...Robert Nicholson
 
IBM IMPACT 2009 Conference Session 2078 - Extending and Integrating Popular P...
IBM IMPACT 2009 Conference Session 2078 - Extending and Integrating Popular P...IBM IMPACT 2009 Conference Session 2078 - Extending and Integrating Popular P...
IBM IMPACT 2009 Conference Session 2078 - Extending and Integrating Popular P...Robert Nicholson
 
Project Zero JavaOne 2008
Project Zero JavaOne 2008Project Zero JavaOne 2008
Project Zero JavaOne 2008Robert Nicholson
 
Project Zero Php Quebec
Project Zero Php QuebecProject Zero Php Quebec
Project Zero Php QuebecRobert Nicholson
 

Mehr von Robert Nicholson (16)

IBM Cloud Integration Platform Introduction - Integration Tech Conference
IBM Cloud Integration Platform Introduction - Integration Tech ConferenceIBM Cloud Integration Platform Introduction - Integration Tech Conference
IBM Cloud Integration Platform Introduction - Integration Tech Conference
 
IBM Hybrid Integration Platform
IBM Hybrid Integration PlatformIBM Hybrid Integration Platform
IBM Hybrid Integration Platform
 
IBM Interconnect 2016 - Hybrid Cloud Messaging
IBM Interconnect 2016 - Hybrid Cloud MessagingIBM Interconnect 2016 - Hybrid Cloud Messaging
IBM Interconnect 2016 - Hybrid Cloud Messaging
 
Platform as a Service - CloudFoundry and IBM Bluemix - Developer South Coast
Platform as a Service - CloudFoundry and IBM Bluemix - Developer South CoastPlatform as a Service - CloudFoundry and IBM Bluemix - Developer South Coast
Platform as a Service - CloudFoundry and IBM Bluemix - Developer South Coast
 
Introducing MQ Light - IBM Interconnect 2015 session AME4181
Introducing MQ Light - IBM Interconnect 2015 session AME4181Introducing MQ Light - IBM Interconnect 2015 session AME4181
Introducing MQ Light - IBM Interconnect 2015 session AME4181
 
MQ Light in IBM MQ: IBM Interconnect 2015 session AME4182
MQ Light in IBM MQ:  IBM Interconnect 2015 session AME4182MQ Light in IBM MQ:  IBM Interconnect 2015 session AME4182
MQ Light in IBM MQ: IBM Interconnect 2015 session AME4182
 
MQ Light for Bluemix - IBM Interconnect 2015 session AME4183
MQ Light for Bluemix - IBM Interconnect 2015 session AME4183MQ Light for Bluemix - IBM Interconnect 2015 session AME4183
MQ Light for Bluemix - IBM Interconnect 2015 session AME4183
 
Mq light For Guide Share Europe 2014
Mq light For Guide Share Europe 2014Mq light For Guide Share Europe 2014
Mq light For Guide Share Europe 2014
 
MQ Light for WTU
 MQ Light for WTU MQ Light for WTU
MQ Light for WTU
 
Messaging in the Cloud with IBM MQ Light and IBM Bluemix
Messaging in the Cloud with IBM MQ Light and IBM BluemixMessaging in the Cloud with IBM MQ Light and IBM Bluemix
Messaging in the Cloud with IBM MQ Light and IBM Bluemix
 
Session 1897 messaging in the cloud with elastic mq mq light and bluemix-impa...
Session 1897 messaging in the cloud with elastic mq mq light and bluemix-impa...Session 1897 messaging in the cloud with elastic mq mq light and bluemix-impa...
Session 1897 messaging in the cloud with elastic mq mq light and bluemix-impa...
 
IBM IMPACT 2009 Session 3100 - Dynamic Scripting and Rich Web 2.0 Interfaces ...
IBM IMPACT 2009 Session 3100 - Dynamic Scripting and Rich Web 2.0 Interfaces ...IBM IMPACT 2009 Session 3100 - Dynamic Scripting and Rich Web 2.0 Interfaces ...
IBM IMPACT 2009 Session 3100 - Dynamic Scripting and Rich Web 2.0 Interfaces ...
 
IBM IMPACT 2009 Conference Session 2024 - WebSphere sMash Integration, PHP wi...
IBM IMPACT 2009 Conference Session 2024 - WebSphere sMash Integration, PHP wi...IBM IMPACT 2009 Conference Session 2024 - WebSphere sMash Integration, PHP wi...
IBM IMPACT 2009 Conference Session 2024 - WebSphere sMash Integration, PHP wi...
 
IBM IMPACT 2009 Conference Session 2078 - Extending and Integrating Popular P...
IBM IMPACT 2009 Conference Session 2078 - Extending and Integrating Popular P...IBM IMPACT 2009 Conference Session 2078 - Extending and Integrating Popular P...
IBM IMPACT 2009 Conference Session 2078 - Extending and Integrating Popular P...
 
Project Zero JavaOne 2008
Project Zero JavaOne 2008Project Zero JavaOne 2008
Project Zero JavaOne 2008
 
Project Zero Php Quebec
Project Zero Php QuebecProject Zero Php Quebec
Project Zero Php Quebec
 

KĂźrzlich hochgeladen

"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...Zilliz
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxRustici Software
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodJuan lago vĂĄzquez
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...apidays
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Zilliz
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAndrey Devyatkin
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot ModelNavi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot ModelDeepika Singh
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...apidays
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesrafiqahmad00786416
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native ApplicationsWSO2
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024The Digital Insurer
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FMESafe Software
 

KĂźrzlich hochgeladen (20)

"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot ModelNavi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 

IBM Cloud Integration Platform High Availability - Integration Tech Conference

  • 1. Integration Technical Conference 2019 E03: High Availability for Cloud Integration Platform Rob Nicholson rob_nicholson@uk.ibm.com
  • 2. Integration Technical Conference 2019 2 Cloud Integration Platform High Availability Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the High availability considerations for the container platform. Based on this we then explore HA for each Integration service. Agenda: • Overview of Cloud Integration Platform Architecture. • Kubernetes and IBM Cloud Private High Availability Concerns. • Deployment considerations • MQ High Availability • Event Streams High Availability • API Connect High Availability • App Connect High Availability • Aspera High Availability
  • 3. 3IntegrationTechnical Conference 2019 Cloud Integration PlatformArchitecture ICP Services IAM Logging Monitoring Metering Integration Services MQ Event Streams App Connect DataPower API Connect Aspera Cloud Integration Platform Management UI ICP Services UI Components Integration Services UI Components Kubernetes Master/Proxy Nodes Management Nodes Worker Nodes Worker Nodes Worker Nodes • Cloud Integration Platform (CIP) deploys and manages instances of Integration Services running on a Kubernetes Infrastructure • Instances of Integration services are deployed individually as needed to satisfy each use-case. • Services can be deployed in Highly Available topologies across multiple Kubernetes worker nodes or non HA on a single worker. • Deployment and management is via the UI or CLI allowing integration with CI/CD pipelines. • CIP Leverages the IBM Cloud Private (ICP) services which run on dedicated master/proxy and Management nodes in HA or non HA configurations. • The Management UI unifies the management UIs of the Integration Services and the ICP services.
  • 4. Integration Technical Conference 2019 4 Container Platform Availability • Cloud Integration Platform is based on ICP Foundation. • ICP Foundation is functionally identical to ICP Cloud Native edition. The only difference is licensing. • All HA, sizing and deployment information provided for ICP Cloud Native edition applies to Cloud Integration Platform. • This presentation provides a summary. For more detail consult ICP Cloud Native edition documentation, articles and education. See the links page at the end of this presentation.
  • 5. 5IntegrationTechnical Conference 2019 Node Types • A kubernetes Node is a VM or bare metal machine which is part of a Kubernetes cluster. • Master Nodes run the services that control the cluster including the etcd database that stores the current state of the cluster. • Proxy Nodes transmit external requests to the services created inside your cluster. • Management Nodes host management services such as monitoring, metering, and logging. • Worker Nodes run Integration Services. If Cloud Integration Platform is running in a cluster with other workloads then these also run on the worker nodes. • Master, Proxy and Management node types can be combined together but it is recommended in production clusters to keep them separate so that excessive workload does not impact stability. • The minimum theoretical configuration is therefore one Master/Proxy/Management node and one Worker node. This would not be Highly available and is unlikely to have enough CPU to be usable for more than limited demos. Note: There are other more obscure node types such as dedicated etcd nodes and vulnerability advisor nodes but these are beyond the scope of this presentation Cloud Integration Platform Cluster Node types Master Worker Proxy Mngmt Node Cloud Integration Platform Theoretical minimum cluster Master/ Proxy/ Mngmt Worker
  • 6. Container Platform Availability
    • To make the solution fully highly available, each component must be deployed in an HA topology.
    • Master Nodes run software that uses a quorum paradigm for high availability, so they must be deployed as an odd number of nodes. Typically either 3 or 5 masters are used in an HA cluster, depending on the size of the cluster and the type of load.
    • Proxy Nodes do not require a quorum, so 2 or more Proxy nodes are needed for HA.
    • Management Nodes do not require a quorum, so 2 or more Management nodes are needed for HA.
    • Worker Nodes run Integration Services. Depending on the integration services required, 2 or more worker nodes may be needed for HA (more detail in subsequent slides).
    • As previously noted, Master/Proxy/Management nodes can be combined, so a minimal configuration could be 3 combined Master/Proxy/Management nodes.
    • A topology that is often used is: 3 Master/Proxies + 2 Management + 3 Workers.
    [Diagrams: an HA cluster with 3 Masters, 2 Proxies, 2 Management nodes and 2 Workers; a minimal HA cluster with 3 combined Master/Proxy/Management nodes and 2 Workers.]
  • 7. A typical production environment (taken from ICP Think 2019 session; http://ibm.biz/icpcapacityplan)
    Machine Role           Number  vCPU (>= 2.4 GHz)  Memory  Disk Space  Comment
    Master                 3       16                 32GB    500GB       3 for HA
    Management             2       16                 32GB    500GB       2 for HA
    Proxy                  2       4                  16GB    400GB       2 for HA
    Vulnerability Advisor  1       8                  32GB    500GB       Optional (non-HA)
    Worker Nodes           2-50    8                  32GB    400GB
  • 8. Deployment Considerations
    • How many independent clusters do you need? Enterprises deploy multiple clusters for any/all of the following reasons:
      • Geographic availability – the ability to survive a regional outage.
      • To separate organizations.
      • To separate development, test and production.
      • To guard against errors by individual operators.
    • Are the Kubernetes nodes deployed into separate failure domains?
      • Separate availability zones in a public cloud.
      • Separate physical servers/racks in a data centre.
    • What are the common points of failure? A sketch for inspecting node failure domains follows below.
    • See https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozon
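One way to check the failure-domain spread is to read the zone labels on each node. A minimal sketch, assuming the official `kubernetes` Python client and a kubeconfig with access to the cluster; it checks the newer `topology.kubernetes.io/zone` label alongside the beta label referenced above.

```python
# Sketch: group cluster nodes by failure domain using the well-known zone labels.
from collections import defaultdict

from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

zones = defaultdict(list)
for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    # Newer clusters use topology.kubernetes.io/zone; older ones the beta label.
    zone = labels.get("topology.kubernetes.io/zone",
                      labels.get("failure-domain.beta.kubernetes.io/zone", "unknown"))
    zones[zone].append(node.metadata.name)

for zone, names in sorted(zones.items()):
    print(f"{zone}: {len(names)} node(s) -> {', '.join(names)}")
```

If every node reports the same zone (or "unknown"), a single failure domain can take out the whole cluster.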
  • 9. Example – HA Cluster in AWS
    • Leverage availability zones:
      • Master/management/VA nodes spread across availability zones.
      • User applications spread across availability zones.
    • AWS ALB/NLB:
      • Load balancer for the management control plane.
      • Load balancer for user applications.
      • Security groups to control network access.
    • EBS as persistent storage.
    • Article: http://ibm.biz/icponaws
    • Documentation: https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.2/supported_environments/aws/overview.html links to a quickstart: https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/
  • 10. Public Cloud Deployment
    • ICP 3.1.1 is supported on IBM Cloud and Amazon; the current CIP is based on ICP 3.1.1.
    • ICP 3.1.2 adds support for Azure.
    • On public clouds it is recommended to use the cloud provider's storage solution:
      • IBM Block Storage
      • Amazon EBS
    • Distribute the cluster across 3 availability zones with less than 30ms latency between them.
    • Use multiple AZs in a single region, not multiple regions.
  • 11. High Availability for Integration Services
    Integration Services run on the worker nodes. The Cloud Integration SolutionPak is composed from the CloudPaks made from the component products. IBM CloudPaks are developed by the product development teams to embody best practice for deploying the product onto Kubernetes as a secure, scalable, highly available deployment. Thus, at a high level, all that has to be ensured to provide a highly available deployment is:
    • There are sufficient nodes for the solution.*
    • The nodes are deployed into separate failure domains so that a single failure will not take out multiple nodes.**
    Kubernetes takes care of:
    • Running appropriate numbers of instances of the SolutionPak, spread across the available workers.
    • Mixing together the workloads on the available worker nodes, taking account of the resources they need.
    • Restarting workloads that fail, and recovering from failed nodes by scheduling workloads onto alternative nodes.
    * Some of the component products require 2 worker nodes for active/standby style HA and others require 3 or more workers for quorum-style deployment.
    ** In larger clusters the nodes should be spread between 3 or more availability zones.
  • 12. Myth Busting
    • MYTH 1: I need separate worker nodes for each Integration Service.
    • MYTH 2: I cannot run multiple SolutionPaks on the same nodes as 'cloud native' workloads.
    • REALITY: Kubernetes schedules pods across the available nodes based on its scheduling policies and the declared resource requirements. This will almost certainly mix workloads together on nodes unless you explicitly prevent it.
    • It is possible to use taints and tolerations to control which nodes a workload is scheduled to (an advanced topic not covered here; a brief sketch follows below).
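For completeness, a minimal sketch of that taint mechanism, assuming the official `kubernetes` Python client; the node name and the dedicated=mq taint are hypothetical examples, not CIP requirements.

```python
# Sketch: reserve a worker so that only pods tolerating dedicated=mq:NoSchedule
# are scheduled onto it. Note: this patch replaces the node's full taint list.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

body = {"spec": {"taints": [
    {"key": "dedicated", "value": "mq", "effect": "NoSchedule"},
]}}
v1.patch_node("worker-node-1", body)

# A workload opts in by declaring a matching toleration in its pod spec:
#   tolerations:
#   - key: dedicated
#     operator: Equal
#     value: mq
#     effect: NoSchedule
```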
  • 13. Summary – High Availability for Integration
    Product        Approach(es) to HA                                                    Minimum workers in the cluster
    MQ             Data availability – failover; service availability – active/active    2
    Event Streams  Quorum                                                                3
    APIC           Quorum                                                                3
    App Connect    Stateless – active/active; stateful – failover                        2
    Aspera         Quorum                                                                3
    DataPower      Quorum                                                                3
    At a high level, all you need to do is deploy the cluster in an HA configuration with at least 3 workers, then deploy the services from the Helm charts and they will be highly available by default.
  • 14. Integration Technical Conference 2019 14 Cloud Integration Platform High Availability Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the High availability considerations for the container platform. Based on this we then explore HA for each Integration service. Agenda: • Overview of Cloud Integration Platform Architecture. • Kubernetes and IBM Cloud Private High Availability Concerns. • Deployment considerations • MQ High Availability • Event Streams High Availability • API Connect High Availability • App Connect High Availability • Aspera High Availability
  • 15. IBM MQ High Availability Concepts
    Availability of the 'MQ Service':
    • The MQ Service is available when applications can connect to an MQ queue and send and receive messages.
    • The MQ Service can be made highly available by running multiple queue managers in active/active configurations.
    Availability of individual messages:
    • An individual message that has been sent to an MQ queue is available when a client can receive that particular message.
    • The data can be made highly available by minimizing the time during which the queue manager holding the message is unavailable.
  • 16. IBM MQ High Availability Options on Distributed Servers
    • Single resilient queue manager: one MQ instance on HA storage; availability and load balancing come from the platform (platform HA, platform load balancing).
    • Multi-instance queue manager: active and standby MQ instances sharing HA storage; IBM MQ product HA with client load balancing.
    • Replicated data queue manager: one active and two standby MQ instances, each with local storage kept in sync by replication; IBM MQ product HA with client or server load balancing.
  • 17. IBM MQ High Availability in Cloud Pak / CIP
    • Single resilient queue manager: the current approach in the Cloud Pak / CIP.
    • Multi-instance queue manager: use within Kubernetes is being investigated.
    • Replicated data queue manager: technology not supported in Kubernetes.
  • 18. Single Resilient Queue Manager on Kubernetes
    • The container is restarted by Kubernetes (container orchestration).*
    • Data is persisted to external HA network storage; the restarted container connects to the existing storage.
    • An IP gateway (from the container platform) routes application traffic to the active instance.
    • Implemented as a Kubernetes StatefulSet of 1.
    • Implication: both the service and the messages stored in the single queue manager are unavailable during failover, so putting applications need retry logic (see the sketch below).
    * Total Kubernetes node failure considerations are described in https://developer.ibm.com/messaging/2018/05/02/availability-scalability-ibm-mq-containers/
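A minimal client-side retry sketch that rides out such a failover, assuming the `pymqi` client package; the queue manager, channel, service address and queue names are hypothetical.

```python
# Sketch: keep retrying an MQ PUT while the single resilient queue manager
# fails over; each attempt opens a fresh connection via the IP gateway.
import time

import pymqi

QMGR, CHANNEL = 'QM1', 'DEV.APP.SVRCONN'
CONN_INFO, QUEUE = 'mq-ha-service(1414)', 'DEV.QUEUE.1'

def put_with_retry(message: bytes, attempts: int = 5, backoff: float = 2.0) -> None:
    for attempt in range(attempts):
        try:
            qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
            try:
                queue = pymqi.Queue(qmgr, QUEUE)
                queue.put(message)
                queue.close()
                return
            finally:
                qmgr.disconnect()
        except pymqi.MQMIError:
            # Queue manager unavailable (e.g. pod being rescheduled); back off.
            time.sleep(backoff * (attempt + 1))
    raise RuntimeError("MQ service unavailable after retries")
```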
  • 19. Improving the MQ Service Availability
    • Moving from a single resilient container to multiple resilient containers provides access to store messages (MQ PUT), even in failover situations.
    • Connection routing is provided to distribute the putting traffic across the available instances.
    • Applications retrieving messages (MQ GET) attach to the individual queue manager holding them; connection routing routes that traffic to the container's location.
  • 20. Horizontal Scaling
    • The same pattern extends to 3, 4, … queue managers: scale the number of MQ instances based on workload requirements, with connection routing spreading MQ PUTs across the messaging layer while MQ GETs attach to individual instances (a client-side sketch follows below).
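As an illustration of the active/active PUT side, a minimal sketch that sprays puts across several queue managers in round-robin fashion, assuming the `pymqi` client package; the instance names and addresses are hypothetical.

```python
# Sketch: distribute MQ PUTs across a set of active queue managers, so adding
# an instance adds capacity. GET applications connect to a specific instance.
import itertools

import pymqi

INSTANCES = [
    ('QM1', 'mq-0.mq.svc(1414)'),
    ('QM2', 'mq-1.mq.svc(1414)'),
    ('QM3', 'mq-2.mq.svc(1414)'),
]
_round_robin = itertools.cycle(INSTANCES)

def put(queue_name: str, message: bytes, channel: str = 'DEV.APP.SVRCONN') -> None:
    qmgr_name, conn_info = next(_round_robin)  # pick the next active instance
    qmgr = pymqi.connect(qmgr_name, channel, conn_info)
    try:
        queue = pymqi.Queue(qmgr, queue_name)
        queue.put(message)
        queue.close()
    finally:
        qmgr.disconnect()
```

In production the routing would usually live in a CCDT or IP gateway rather than in application code, as the next slide discusses.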
  • 21. Connection Routing
    Three typical options direct an application's MQ traffic to one of multiple possible network destinations:
    • Static routing: the client embeds an endpoint list. Performance impact when the primary is unavailable; brittle configuration; no load balancing.
    • Client Channel Definition Table (CCDT): the client references endpoints indirectly. Enhanced workload management strategies; central configuration management.
    • Load balancer (IP gateway): the client references endpoints indirectly. Enhanced workload management strategies; central configuration management; not recommended for JMS.
    A CCDT usage sketch follows below.
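A minimal sketch of pointing a client at a centrally managed CCDT, assuming `pymqi` built against the IBM MQ client libraries; MQCHLLIB and MQCHLTAB are the standard MQ client environment variables, while the path and queue manager name are hypothetical.

```python
# Sketch: connect via a CCDT instead of embedding endpoints in the application.
import os

import pymqi

# Point the MQ client at the channel table (often fetched from a central
# HTTP/FTP location by deployment tooling before the application starts).
os.environ['MQCHLLIB'] = '/var/mqm/ccdt'   # directory containing the table
os.environ['MQCHLTAB'] = 'AMQCLCHL.TAB'    # the CCDT file itself

# With the CCDT in place, the client resolves 'QM1' from the table, so no
# channel name or connection endpoint appears in application code.
qmgr = pymqi.QueueManager('QM1')
print('Connected:', qmgr.inquire(pymqi.CMQC.MQCA_Q_MGR_NAME))
qmgr.disconnect()
```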
  • 22. Recommended Routing
    [Diagram: applications, each with a CCDT, connect through IP gateways to the messaging layer containers; MQ PUT traffic is spread across the instances, while MQ GET traffic is routed to the specific queue manager instance holding the messages.]
  • 23. Integration Technical Conference 2019 23 Cloud Integration Platform High Availability Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the High availability considerations for the container platform. Based on this we then explore HA for each Integration service. Agenda: • Overview of Cloud Integration Platform Architecture. • Kubernetes and IBM Cloud Private High Availability Concerns. • Deployment considerations • MQ High Availability • Event Streams High Availability • API Connect High Availability • App Connect High Availability • Aspera High Availability
  • 24. Event Streams High Availability
    • Event Streams deploys Apache Kafka in an HA topology by default.
    • The chart deploys brokers as a StatefulSet with 3 members by default, and Kubernetes schedules the pods to different nodes.
    • Kafka's architecture is inherently HA: the client connection protocol handles discovery and failure, with intelligent client discovery and reconnection logic.
    • The default for topic creation is 3 replicas with min in-sync replicas = 2 (see the producer sketch below).
    • For multi-AZ deployments, the number of brokers must currently match the number of AZs.
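On the producing side, durability across a single broker failure follows from acks='all' combined with the 3-replica / min-in-sync-2 topic defaults above. A minimal sketch, assuming the `kafka-python` package; the bootstrap address and topic are hypothetical, and the TLS/SASL credentials a real Event Streams endpoint requires are omitted.

```python
# Sketch: a producer that waits for the in-sync replicas (>= 2 here) to
# persist each record, so a single broker failure loses no acknowledged data.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='eventstreams-bootstrap:443',
    acks='all',   # require acknowledgement from all in-sync replicas
    retries=5,    # retries plus client reconnection logic ride out failover
)

future = producer.send('orders', b'order-created:1234')
record_metadata = future.get(timeout=30)  # block until acked, or raise
print(f"stored at {record_metadata.topic}[{record_metadata.partition}]@{record_metadata.offset}")
producer.flush()
```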
  • 25. Integration Technical Conference 2019 25 Cloud Integration Platform High Availability Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the High availability considerations for the container platform. Based on this we then explore HA for each Integration service. Agenda: • Overview of Cloud Integration Platform Architecture. • Kubernetes and IBM Cloud Private High Availability Concerns. • Deployment considerations • MQ High Availability • Event Streams High Availability • API Connect High Availability • App Connect High Availability • Aspera High Availability
  • 26. API Connect High Availability
    • API Connect requires 3 worker nodes for HA.
    • The chart deploys as HA by default:
      • global setting: mode = standard (dev mode is non-HA)
      • Cassandra cluster size = 3
      • Gateway replica count = 3
    • Kubernetes distributes the resulting resources across 3 (or more) nodes in the cluster for high availability (a verification sketch follows below).
    • The API Connect whitepaper discusses multi-cluster, multi-data-centre and DR topics: http://ibm.biz/APIC2018paper
    [Diagram: Gateway, Analytics, Portal and Manager pods spread across Worker Nodes 1-3 of an ICP cluster.]
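One way to confirm that the replicas really landed on distinct workers is to compare pod placements. A minimal sketch, assuming the official `kubernetes` Python client; the namespace and label selector are hypothetical and would need to match the actual chart's labels.

```python
# Sketch: check that a component's replicas are spread across enough nodes.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("apiconnect", label_selector="app=gateway").items
nodes = {pod.spec.node_name for pod in pods}
print(f"{len(pods)} replica(s) on {len(nodes)} distinct node(s): {sorted(nodes)}")
assert len(nodes) >= 3, "gateway replicas are not spread across 3 nodes"
```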
  • 27. Integration Technical Conference 2019 27 Cloud Integration Platform High Availability Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the High availability considerations for the container platform. Based on this we then explore HA for each Integration service. Agenda: • Overview of Cloud Integration Platform Architecture. • Kubernetes and IBM Cloud Private High Availability Concerns. • Deployment considerations • MQ High Availability • Event Streams High Availability • API Connect High Availability • App Connect High Availability • Aspera High Availability
  • 28. App Connect High Availability
    • The ACE control plane is HA (ReplicaSet of 3) by default.
    • Integration servers deployed without local MQ queue managers (QMs) are stateless:
      • Deployed HA (ReplicaSet of 3) by default.
      • The BAR file is retrieved from the control plane at startup.
    • Integration servers deployed with a local QM are deployed like MQ: a single resilient queue manager (StatefulSet of 1).
    [Diagram: the ACE control plane (3 CP pods), 3 stateless ACE servers and a stateful ACE + MQ server spread across Worker Nodes 1-3 of an ICP cluster.]
  • 29. Integration Technical Conference 2019 29 Cloud Integration Platform High Availability Understanding High Availability for the Cloud Integration Platform involves first understanding the overall architecture and the High availability considerations for the container platform. Based on this we then explore HA for each Integration service. Agenda: • Overview of Cloud Integration Platform Architecture. • Kubernetes and IBM Cloud Private High Availability Concerns. • Deployment considerations • MQ High Availability • Event Streams High Availability • API Connect High Availability • App Connect High Availability • Aspera High Availability
  • 30. Aspera HSTS
    • Aspera HSTS has an architecture that can be deployed highly available.
    • However, Aspera HSTS in the CIP GA code cannot be deployed as highly available directly from the Helm chart.
    • The deployment can be modified to make it highly available.
  • 31. Aspera HSTS – Routing HA
    Aspera HSTS serves two types of traffic:
    • HTTPS API traffic: routed via ICP's Ingress Controller, which is highly available.
  • 32. Aspera HSTS – Routing HA
    • FASP transfer traffic: routed by custom proxies built by Aspera.
      • The proxies can be scaled horizontally as a ReplicaSet and are redundantly available.
      • The proxies coordinate with the Aspera Load Balancer to find available transfer pods.
      • The Load Balancer is stateless and can also run as a ReplicaSet.
  • 33. Aspera HSTS – Application HA
    • The Aspera Node API is a stateless HTTP server backed by an HA Redis StatefulSet.
  • 34. Aspera HSTS – Application HA
    • Aspera transfer pods are each dedicated to a single transfer session.
    • The Swarm Service manages the transfer pods, auto-scaling as needed and ensuring free pods are available at all times.
    • If the Swarm Service dies, no existing traffic is impacted, but no new transfer pods are scheduled until it is restarted.
    • The Swarm Service is currently deployed as a single instance; leader election will be added as part of L2 certification.
  • 35. Aspera HSTS – Control Plane
    • The Stats Service consumes transfer events from Kafka and publishes pod availability for consumption by the Aspera Load Balancer.
    • The Stats Service is currently a single instance but will be updated to use leader election as part of L2 certification.
    • If the Stats Service fails, load balancing is based on stale information until the Stats Service restarts.
    • Kafka is provided by IBM Event Streams, which is an HA service.

Editor's notes

  1. As a rule of thumb, the memory requirement (in GB) is 4x the vCPU count. For HA, master nodes require a quorum, so there should be an odd number of them, while management nodes and proxy nodes do not require a quorum. The workload (application and middleware) sizing determines the total capacity requirement, and the number of worker nodes is derived from that.
  2. When thinking about the options for IBM MQ high availability on a cloud platform, you may initially consider the following:
     Single resilient queue manager: a single instance of a queue manager, where the cloud environment monitors it and replaces the VM or container as necessary. From a container point of view this requires persistent storage, which is likely to be HA storage to overcome a host machine failure. Container failover times are typically fast and comparable to a multi-instance queue manager, while those for other technologies vary. Client connections are routed to the active instance by an external load-balancing mechanism, which removes the need for the client to provide this load balancing. Examples of these technologies include Docker, Kubernetes, VMware HA / SRM and AWS Auto Scaling.
     Multi-instance queue manager: the "traditional" out-of-the-box high availability provided by IBM MQ. Multi-instance queue managers are instances of the same queue manager configured on different servers. One instance is defined as the active instance and another as the standby instance; if the active instance fails, the queue manager starts automatically on the standby server. Shared storage is used for the queue manager data, so either instance is able to access it; normally this shared storage is HA storage to overcome failure situations. A file lock on the storage determines which of the two possible instances should be active. This HA option does not provide any IP or hostname load balancing, so it is the responsibility of the MQ client to understand the two possible locations where the MQ instance could be running.
     Replicated data queue manager: a relatively recent addition to the MQ Advanced product (introduced in 9.0.4), based on technology used in the MQ Appliance. In the simplest of situations it includes three standard Red Hat Linux servers, one active and handling requests, with the other two replicating the data and waiting to become active, in a similar logical manner to the standby instance within a multi-instance queue manager. When comparing with the other options there are three aspects to highlight:
     - No shared disk: each individual server includes a complete replica of the queue manager data, removing the need for a shared disk, which simplifies the configuration and potentially improves the performance of the overall solution.
     - Floating IP: with multi-instance queue managers, two servers with separate IP addresses could be hosting the active instance of the queue manager, so the client needs to be explicitly aware of the two IP addresses. With the replicated data queue manager a floating IP address is associated with the server running the active queue manager, so clients only need to be aware of the floating IP address, simplifying the logic on the client.
  3. In a container environment, a degree of high availability is provided automatically by Docker and container orchestration. A container can be automatically restarted in the case of a failure, or if the container is detected to be unhealthy. In that case the container is terminated and destroyed, and a new container is created and started that logically takes over the function of the previous container. The container orchestration platform provides an IP gateway to route requests to the location of the active container. Any data that the container needs to persist is stored on shared storage, so any running container can attach to it (the platform ensures that only one container is running at any one time).
  4. It is important to consider the availability to PUT a message separately from the availability to GET (retrieve) a message. Many use cases want to assure that an application is able to offload work (PUT a message), and therefore need a higher level of availability for puts than for the ability to GET (retrieve) the message. We call this higher level of availability for offloading work continuous availability. Here we create at least two instances of MQ, both of which have queues that can receive an application's message. Routing of inbound connections to store messages is then provided across the available MQ instances; the options for this routing are discussed later. Applications retrieving messages connect directly to the MQ instances instead of using the routing capability, because they need to connect to the MQ instance hosting the individual queue to retrieve its messages.
  5. As you may have noticed on the previous slide, we have also horizontally scaled the queue managers; this scaling can continue with 3, 4, … queue manager instances to support the level of demand required, using the previous pattern.
  6. The previous charts included the term "Connection Routing", which provides the ability for the application to have its MQ traffic directed to one of multiple possible network destinations. There are three typical options:
     Static routing – The simplest way of removing the restriction of a connection to a single address is to use a comma-separated list of host names and their ports. This list can be set in connection factories or in the MQSERVER environment variable. The approach can be used simply to find a single queue manager that might be running at one of multiple possible network locations, or to distribute traffic across multiple queue managers that offer the same queue. There are several considerations with this technique: the further down the list the client must go, the longer the connection takes, so the list should be ordered with the most likely available locations first; the first available queue manager in the list receives all the connections, which does not make it a good match for workload balancing (it is possible to provide lists in different orders to different applications for some level of balancing, but there are better mechanisms); and the configuration is brittle, in that changes must be made to individual application configuration if and when servers are moved.
     Client Channel Definition Table – The CCDT is a file that determines the connection information used by a client to connect to a queue manager. A CCDT file can contain multiple entries for a single logical connection, allowing it to distribute traffic across a number of queue managers. The CCDT can be configured so that channels in a group are either tried sequentially (for availability) or randomly based on specified weightings (for workload balancing). This provides greater flexibility than defining the connection information statically. The CCDT file can also be retrieved from a central location using HTTP or FTP, meaning updates can occur once and be picked up by all applications; this central management is a key advance over the static routing mechanism.
     Load balancer – The load balancer could be provided by the container orchestration engine, or be a standard network load balancer such as an F5. Either way, it can provide TCP load balancing in front of the IBM MQ queue managers. This approach removes the connection selection from the application, which instead connects to the load balancer IP and port. Load balancers often offer richer workload management strategies, which may make them more appealing than a CCDT, although certain capabilities such as JMS are not recommended with an external load balancer, so these need to be considered when selecting an effective strategy.