Integration Technical Conference 2019
Build a truly fault tolerant and
scalable IBM MQ solution
David Ware
Chief Architect, IBM MQ
© 2019 IBM Corporation
Fault tolerance

[Diagram: availability of a single queue manager vs. multiple queue managers]

More is better for availability
It's not just the queue managers…

Step 1
Horizontally scale the application into multiple instances, all performing the same role. A queue manager works better when there are multiple applications working in parallel.

Step 2
Horizontally scale the queue managers. Create multiple queue managers with the 'same' configuration and distribute the application instances across the queue managers.
Scaling

[Diagram: throughput of a single queue manager (x1) vs. multiple queue managers (x1 … xn)]

More is better for scalability
Let's go through that availability thing one step at a time…
Single, non-HA queue manager

[Diagram: one queue manager on a single node, serving several applications]

System availability: how much messaging work can proceed. While the QMgr is down, all of the applications need to wait for it to be restarted.

Message availability: the proportion of queued messages that are available. All queued messages require the QMgr to restart.
Single HA queue manager

[Diagram: one active queue manager instance on Node A, with replica instances on Nodes B and C]

HA queue managers are restarted quickly (typically a 'few' seconds), so both system and message availability dip only briefly.

Here we have one active instance of the queue manager with two replica (standby) instances ready to take over. (This happens to be the MQ RDQM HA model; other solutions, like multi-instance queue managers, are subtly different (only one standby) but essentially the same.)
Multiple HA queue managers

[Diagram: three highly available queue managers, each handling 1/3 of the message traffic, with applications spread across them]

To further increase the availability you need to remove the single point of failure that is a queue manager.

For this, create multiple queue managers and stripe the messaging workload across them by defining the "same" queue on all of them.

Each message is only queued on a single queue manager, but the multiple queue managers mean any one outage is confined to a subset of the workload.
Multiple HA queue managers – ordered consumption

[Diagram: three HA queue managers, each handling 1/3 of the traffic; each application instance connects to a specific queue manager (Connect: QMgr1, QMgr2, QMgr3)]

Messages that require ordering need to go to the same queue manager. One approach to achieve this is for application instances to connect to a specific queue manager based on their ordered stream of messages. This ensures messages are processed in order per group.

System availability: while a QMgr is down, a subset of applications need to wait for it to restart.

Message availability: queued messages are confined to a single queue manager, so that QMgr needs to restart to make those messages available. Hence the need to make each queue manager highly available.
Multiple HA queue managers – unordered consumption

[Diagram: three HA queue managers, each handling 1/3 of the traffic; application instances connect through a queue manager group (Connect: QMgrGroup)]

Application instances can connect to any queue manager, as the order in which messages are queued across the multiple instances is not a concern. Connecting across a group can be achieved with a CCDT queue manager group.

System availability: there is always a running QMgr for an application to connect to for new work.

Message availability: queued messages are still confined to a single queue manager, so that QMgr still needs to restart to make those particular messages available.
Single vs. Multiple

Single
o Simple
o Invisible to applications
o Limited by maximum system size
o Liable to hit internal limits
o Not all aspects scale linearly
o Restart times can grow
o Every outage is high impact

Multiple
o Unlimited by system size
o All aspects scale linearly
o More suited to cloud scaling
o Reduced restart times
o Enables rolling upgrades
o Tolerates partial failures
o Visible to applications – limitations apply
o Potentially more complicated
uniform cluster…

Try to stop thinking about each individual queue manager and start thinking about them as a cluster.
The fundamentals of MQ Clusters
(skip this if you know it)
MQ clustering

What MQ Clusters provide:

Foundation
o Configuration directory
o Dynamic registration and lookup
o Dynamic channel management
o Dynamic message routing

Built on that foundation
o Availability routing
o Horizontal scaling of queues
Cluster management

[Diagram: a cluster of queue managers; two marked FR act as the directory]

Full repositories learn everything about the cluster. These are the "directory servers". The other queue managers only learn about what they need. These are "partial repositories".
Cluster management

[Diagram: a queue manager publishing its clustered queue to the full repositories, and another queue manager looking it up]

Details of a clustered queue or topic are sent to the full repositories in the cluster. The full repositories will pass on the details of cluster queues and the connection details of the queue manager they are located on.

Queue managers persistently cache their knowledge of the cluster resources, limiting interactions with the full repositories.
Cluster management

Unused cluster information times out and is removed from the local cache.
Cluster management

If a queue manager is simply switched off, rather than exiting the cluster in a controlled manner, it will eventually be cleaned from every queue manager's cache.
How to set up a cluster
(skip this too if you know it)
Step 1: Create your two full repositories

On FR1:

ALTER QMGR REPOS('CLUS1')
DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(FR1 location)
DEFINE CHANNEL('CLUS1.FR2') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR2 location)

On FR2:

ALTER QMGR REPOS('CLUS1')
DEFINE CHANNEL('CLUS1.FR2') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(FR2 location)
DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location)
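Once both repositories are defined, each can be checked from runmqsc. A minimal sketch (the attribute selection here is illustrative):

```mqsc
* Run against each full repository to confirm the cluster membership
* it has learned; QMTYPE distinguishes full from partial repositories
DISPLAY CLUSQMGR(*) QMTYPE STATUS
```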
Step 2: Add in more queue managers

On QMGR1:

DEFINE CHANNEL('CLUS1.QMGR1') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(QMGR1 location)
DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location)
DEFINE QLOCAL(Q1) CLUSTER(CLUS1)

The full repositories now know about QMGR1 and Q1@QMGR1. To check what a queue manager knows:

DISPLAY CLUSQMGR(*)
DISPLAY QCLUSTER(*)
Step 2 (continued): Add in more queue managers

On QMGR2:

DEFINE CHANNEL('CLUS1.QMGR2') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(QMGR2 location)
DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location)

Both full repositories now know about QMGR1, QMGR2 and Q1@QMGR1.
Step 3: Start sending messages

[Diagram: Client 1 connects to QMGR2 and issues MQOPEN(Q1). QMGR2 asks a full repository "Q1?", learns Q1@QMGR1 and caches it, and a cluster channel to QMGR1 delivers the messages]
So all you needed…

§ Two full repository queue managers
§ A cluster receiver channel each
§ A single cluster sender each
§ No need to manage pairs of channels between each queue manager combination, or their transmission queues
§ No need for remote queue definitions
Horizontal scaling with MQ Clustering
Horizontal scaling with MQ Clustering

Earlier we showed how to scale applications directly across multiple queue managers; with an MQ Cluster you can do the same with queue manager-to-queue manager message traffic.

A queue manager will typically route messages based on the name of the target queue. In an MQ Cluster it is possible for multiple queue managers to independently define the same named queue. Any queue manager that needs to route messages to that queue now has a choice…
Channel workload balancing

• Cluster workload balancing applies when there are multiple cluster queues of the same name
• Cluster workload balancing will be applied in one of three ways:
  • When the putting application opens the queue - bind on open
  • When a message group is started - bind on group
  • When a message is put to the queue - bind not fixed
• When workload balancing is applied:
  • The source queue manager builds a list of all potential targets based on the queue name
  • Eliminates the impossible options
  • Prioritises the remainder
  • If more than one comes out equal, workload balancing ensues…
• Balancing is based on:
  • The channel – not the target queue
  • Channel traffic to all queues is taken into account
  • Weightings can be applied to the channel
• …this is used to send the messages to the chosen target
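The channel weighting mentioned above is set on the target queue manager's cluster-receiver channel. A hedged sketch (channel and cluster names are illustrative):

```mqsc
* Illustrative MQSC: favour this queue manager in workload balancing
* by raising its cluster-receiver channel weight
* (CLWLWGHT ranges 1-99, default 50)
ALTER CHANNEL('CLUS1.QMGR1') CHLTYPE(CLUSRCVR) CLWLWGHT(75)
```

Because the receiving queue manager advertises its own cluster-receiver definition to the cluster, the weight propagates to every queue manager doing the balancing.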
Tip: By default, a matching queue on the same queue manager that the application is connected to will be prioritized over all others, for speed. To overcome that, look at the CLWLUSEQ attribute.
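As a sketch of that CLWLUSEQ override (queue name is illustrative):

```mqsc
* Illustrative MQSC: allow remote instances of the queue to be chosen
* even when a local instance exists. The default, CLWLUSEQ(QMGR),
* defers to the queue manager attribute, which defaults to LOCAL.
ALTER QLOCAL(Q1) CLWLUSEQ(ANY)
```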
Horizontal scaling – do I really need MQ Clustering?

Scaled out applications
[Diagram: each client connects directly to one of three queue managers, each with its own service instance]
Q. Is clustering required?
A. Maybe, maybe not…

Single producing application
[Diagram: one client connected to one queue manager, with service instances spread across several queue managers]
Q. Is clustering required?
A. Definitely

Gateway routing
[Diagram: clients connect to a gateway queue manager, which routes on to the service queue managers]
Q. Is clustering required?
A. Definitely

Back to the uniform cluster
Building scalable, fault-tolerant solutions

Many of you have built your own continuously available and horizontally scalable solutions over the years. Let's call this the "uniform cluster" pattern.

[Diagram: application instances decoupled from a set of matching queue managers]

MQ has provided you many of the building blocks:
Client auto-reconnect
CCDT queue manager groups

But you're left to solve some of the problems, particularly with long running applications:
Efficiently distributing your applications
Ensuring all messages are processed
Maintaining availability during maintenance
Handling growth and contraction of scale
Uniform Cluster (IBM MQ 9.1.2 CD)

MQ 9.1.2 started to make that easier. For the distributed platforms, declare a set of matching queue managers to be following the uniform cluster pattern:
All members of an MQ Cluster
Matching queues are defined on every queue manager
Applications can connect as clients to every queue manager

MQ will automatically share application connectivity knowledge between queue managers. The group will use this knowledge to automatically keep matching application instances balanced across the queue managers. Matching applications are identified by application name (with new abilities to programmatically define this).

MQ 9.1.2 has started to roll out the client support for this.

https://developer.ibm.com/messaging/2019/03/21/building-scalable-fault-tolerant-ibm-mq-systems/
Application awareness (IBM MQ 9.1.2 CD)

Automatic application balancing:
Application instances can initially connect to any member of the group. We recommend you use a queue manager group and CCDT to remove any SPoF.
Every member of the uniform cluster will detect an imbalance and request other queue managers to donate their applications.
Hosting queue managers will instigate a client auto-reconnect with instructions of where to reconnect to.
Applications that have enabled auto-reconnect will automatically move their connection to the indicated queue manager.

Client support has been increased over subsequent CD releases: 9.1.2 CD started with support for C-based applications, 9.1.3 CD added JMS…
Automatic application balancing (IBM MQ 9.1.2 CD)

Automatically handle rebalancing following planned and unplanned queue manager outages:
Existing client auto-reconnect and CCDT queue manager groups will enable initial re-connection on failure.
Uniform Cluster rebalancing will enable automatic rebalancing on recovery.
Automatic application balancing (IBM MQ 9.1.2 CD)

This even helps to horizontally scale out a queue manager deployment: simply add a new queue manager to the uniform cluster. The new queue manager will detect an imbalance of applications and request its fair share.
Uniform Cluster features (IBM MQ 9.1.2 CD)

As well as the automatic rebalancing of the C library based clients, MQ 9.1.2 CD introduced a number of new or improved features for the distributed platforms that tie together to make all this possible. This means you need both the queue managers and the clients to be on the latest MQ version.

Creation of a Uniform Cluster
• A simple qm.ini tuning parameter for now:

TuningParameters:
  UniformClusterName=CLUSTER1

The ability to identify applications by name, to define grouping of related applications for balancing
• Extends the existing JMS capability to all languages:

$ export MQAPPLNAME=MY.SAMPLE.APP

Auto-reconnectable applications
• Only applications that connect with the auto-reconnect option are eligible for rebalancing, e.g. in the client ini file:

Channels:
  DefRecon=YES

Text based CCDTs to make it easier to configure this behaviour
• And to allow duplicate channel names

https://developer.ibm.com/messaging/2019/03/21/walkthrough-auto-application-rebalancing-using-the-uniform-cluster-pattern/
Balancing by application name (IBM MQ 9.1.2 CD)

Automatic application balancing is based on the application name alone. Different groups of application instances with different application names are balanced independently.

By default the application name is the executable name. This has been customisable with Java and JMS applications for a while; the MQ 9.1.2 CD clients have extended this to other programming languages, for example C, .NET, XMS, …

The application name can be set either programmatically or as an environment override.

https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.1.0/com.ibm.mq.dev.doc/q132920_.htm
Building scalable and available solutions: JSON CCDT (IBM MQ 9.1.2 CD)

Build your own JSON format CCDTs. These support multiple channels of the same name on different queue managers, to simplify the building of uniform clusters.

Available with all 9.1.2 clients: C, JMS, .NET, Node.js, Golang.

{
  "channel": [
    {
      "name": "ABC",
      "queueManager": "A"
    },
    {
      "name": "ABC",
      "queueManager": "B"
    }
  ]
}
Configuring the CCDT for application balancing in a Uniform Cluster (IBM MQ 9.1.2 CD)

To correctly set up a CCDT for application rebalancing it needs to contain two entries per queue manager:
• An entry under the name of a queue manager group
• An entry under the queue manager's real name

(These previously would need to be different channels, but with the JSON CCDT this is unnecessary.)

The application connects using the queue manager group as the queue manager name (prefixed with an '*').
{
  "channel": [
    {
      "name": "SVRCONN.CHANNEL",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [ { "host": "host1", "port": 1414 } ],
        "queueManager": "ANY_QM"
      }
    },
    {
      "name": "SVRCONN.CHANNEL",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [ { "host": "host2", "port": 1414 } ],
        "queueManager": "ANY_QM"
      }
    },
    {
      "name": "SVRCONN.CHANNEL",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [ { "host": "host1", "port": 1414 } ],
        "queueManager": "QMGR1"
      }
    },
    {
      "name": "SVRCONN.CHANNEL",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [ { "host": "host2", "port": 1414 } ],
        "queueManager": "QMGR2"
      }
    }
  ]
}

(Repeat the pair of entries, one under the group name ANY_QM and one under the real queue manager name, for each queue manager in the group.)
Can I decouple any application?
Does this work for all applications? – no

This pattern of loosely coupled applications only works for certain application styles.

Good
Applications that can tolerate being moved from one queue manager to another without realising, and that can run with multiple instances:
• Datagram producers and consumers
• Responders to requests, e.g. MDBs
• No message ordering

Bad
Applications that create persistent state across multiple messaging operations, or that require a single instance to be running:
• Requestors waiting for specific replies
• Dependent on message ordering
• Global transactions…
A new hope for transactions (IBM MQ 9.1.2 CD)

Global transactions require a single resource manager to be named when connecting; for MQ, a resource manager is a queue manager. This prevents the use of queue manager groups in CCDTs.

However, WebSphere Liberty 18.0.0.2 and MQ 9.1.2 CD support the use of CCDT queue manager groups when connecting:

{
  "channel": [
    {
      "name": "SVRCONN.QM1",
      "queueManager": "GROUP"
    },
    {
      "name": "SVRCONN.QM2",
      "queueManager": "GROUP"
    }
  ]
}
Availability routing in an MQ Cluster
Clustering for availability

Is MQ Clustering a high availability solution? NO and YES.

NO – not for the message data. Each message is only available from a single queue manager.

YES – clustering can form a part of the overall high availability of the messaging system:
– Having multiple potential targets for any message can improve the availability of the solution, always providing an option to process new messages.
– A queue manager in a cluster has the ability to route new and old messages based on the availability of the channels, routing messages to running queue managers.
– Clustering can be used to route messages to active consuming applications.
Channel availability routing

• When performing workload balancing, the availability of the channel to reach the target is a factor
• All things being equal, messages will be routed to those targets with a working channel

Things that can prevent routing:
• Applications targeting messages at a specific queue manager (e.g. reply messages)
• Using "cluster workload rank"
• Binding messages to a target

Routing of messages based on availability doesn't just happen when they're first put; it also occurs for queued transmission messages every time the channel is retried. So blocked messages can be re-routed, if they're not prevented…
Pros and cons of binding

Bind on open / Bind on group
Bind context: duration of an open, or duration of a logical group
• All messages put within the bind context will go to the same target*
• Message order can be preserved**
• Workload balancing logic is only driven at the start of the context
• Once a target has been chosen it cannot change
  • Whether it's available or not
  • Even if all the messages could be redirected

Bind not fixed
Bind context: none
• Greater availability, a message will be redirected to an available target***
• Overhead of workload balancing logic for every message
• Message order may be affected

Bind on open is the default. It could be set on the cluster queue (don't forget aliases) or in the app.

* While a route is known by the source queue manager, it won't be rebalanced, but it could be DLQd
** Other aspects may affect ordering (e.g. dead-letter queueing)
*** Unless it's fixed for another reason (e.g. specifying a target queue manager)
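The queue-side default can be changed with the DEFBIND attribute, which applications can still override at open time (with the MQOO_BIND_* open options). A sketch, with an illustrative queue name:

```mqsc
* Illustrative MQSC: let each message be workload balanced
* independently, rather than fixing the target at MQOPEN
ALTER QLOCAL(Q1) DEFBIND(NOTFIXED)
* DEFBIND(OPEN) is the default; DEFBIND(GROUP) fixes the target
* for the duration of a logical message group
```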
Application availability routing
Application based routing

• Cluster workload balancing does not take into account the availability of receiving applications, or a build-up of messages on a queue

[Diagram: clients put messages via a routing queue manager to two queue managers hosting Service 1; one service instance has failed]

Blissful ignorance: the routing queue manager is unaware of the failure of one of the service instances.

Unserviced messages: half the messages will quickly start to build up on the service queue.
Application based routing

• MQ provides a sample monitoring service tool, amqsclm
• It regularly checks for attached consuming applications (IPPROCS)
• And automatically adjusts the cluster queue definitions to route messages intelligently (CLWLPRTY)
• That information is automatically distributed around the cluster

Detecting a change: when a change to the open handles is detected, the cluster workload balancing state is modified.

Sending queue managers: newly sent messages will be sent to active instances of the queue.

Moving messages: any messages that slipped through will be transferred to an active instance of the queue.
Cluster Queue Monitoring Sample

– amqsclm is provided with MQ to ensure messages are directed towards the instances of clustered queues that have consuming applications currently attached. This allows all messages to be processed effectively even when a system is asymmetrical (i.e. consumers not attached everywhere).
• In addition it will move already-queued messages from instances of the queue where no consumers are attached to instances of the queue with consumers. This removes the chance of long-term marooned messages when consuming applications disconnect.
– The above allows for more versatility in the use of clustered queue topologies where applications are not under the direct control of the queue managers. It also gives a greater degree of high availability in the processing of messages.
– The tool provides a monitoring executable to run against each queue manager in the cluster hosting queues, monitoring the queues and reacting accordingly.
• The tool is provided as source (the amqsclm.c sample) to allow the user to understand its mechanics and customise it where needed.
AMQSCLM Logic

– Based on the existing MQ cluster workload balancing mechanics:
• Uses the cluster priority of individual queues – all else being equal, preferring to send messages to instances of queues with the highest cluster priority (CLWLPRTY).
• Using CLWLPRTY always allows messages to be put to a queue instance, even when no consumers are attached to any instance.
• Changes to a queue's cluster configuration are automatically propagated to all queue managers in the cluster that are workload balancing messages to that queue.
– Single executable, set to run against each queue manager with one or more cluster queues to be monitored.
– The monitoring process polls the state of the queues on a defined interval:
• If no consumers are attached:
  – CLWLPRTY of the queue is set to zero (if not already set).
  – The cluster is queried for any active (positive cluster priority) queues.
  – If they exist, any queued messages on this queue are got/put to the same queue. Cluster workload balancing will re-route the messages to the active instance(s) of the queue in the cluster.
• If consumers are attached:
  – CLWLPRTY of the queue is set to one (if not already set).
– Defining the tool as a queue manager service will ensure it is started with each queue manager.
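Defining the tool as a queue manager service could look something like the following sketch; the install path and the sample's arguments are illustrative, so check the compiled sample's own usage text for its actual flags:

```mqsc
* Illustrative MQSC: run amqsclm under queue manager control so it
* starts and stops with the queue manager (path/arguments are
* examples only, not verified against the sample's real options)
DEFINE SERVICE(AMQSCLM) +
       CONTROL(QMGR) +
       SERVTYPE(SERVER) +
       STARTCMD('/opt/mqm/samp/bin/amqsclm') +
       STARTARG('-m MY.QMGR -q MY.CLUSTER.QUEUE')
```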
Putting it all together
Uniform Cluster: bringing it all together

• Build a matching set of queue managers, in the style of a uniform cluster
• Make them highly available to prevent stuck messages
• Consider adding amqsclm to handle a lack of consumers
• Set up your CCDTs to decouple applications from individual queue managers
• Look at the 9.1.2+ application rebalancing capability and see if it matches your needs
• Connect your applications
What about MQ Clusters in the cloud?
MQ Clusters and Clouds

"Cloud platforms provide all an MQ cluster can do" – not quite…

Clouds often provide cluster-like capability: directory services and routing, and network workload balancing. That is great for stateless workload balancing, and can be good for balancing unrestricted clients across multiple queue managers.

But where state is involved, such as reliably sending messages from one queue manager to another without risking message loss or duplication, such routing isn't enough. That's still the job of an MQ cluster…
MQ Clusters and Clouds

"So will MQ clusters work in a cloud?" – yes, but think it through first…

Some things may be different in your cloud:
o A new expectation that queue managers will be created and deleted more dynamically than before
o Queue managers will "move around" with the cloud
o Your level of control may be relaxed
Adding and removing queue managers
60
Adding queue managers
o Have a simple clustering topology
o Separate out your full repositories and manage those separately
o Automate the joining of a new queue manager
+
Removing queue managers - this is harder!
o You might have messages on it that you need, how are you going to remove those first?
o If you just switch off the queue manager, it’ll still be known by the cluster for months!
o Is that a problem?
o If routing is based on availability, messages will be routed to alternative queue managers
o Messages will sit on the cluster transmission queues while the queue manager is still known
o It makes your view of the cluster messy
o Automate the cluster removal
o From the deleting queue manager
o Stop and delete the cluster channels – give it a little time to work through to the FRs
o Then from a full repository – as it’ll never be coming back
o RESET the queue manager out of the cluster
_
© 2019 IBM Corporation
Adding and removing queue managers
Adding queue managers
– Have a simple clustering topology
– Separate out your full repositories and manage those separately (there’s no need to be adding and removing them
dynamically)
– Automate the joining of a new queue manager
Removing queue managers
– That’s harder!
– You might have messages on it that you need, how are you going to remove those?
– If you just switch off the queue manager, it’ll still be known by the cluster for months!
– Is that a problem?
• If routing is based on availability, messages will be routed to alternative queue managers
– Unless there’s no other choice (e.g. replies being targeted to this queue manager, or no other queue of the same name in the cluster)
• Messages will sit on the cluster transmission queues while the queue manager is still known
– If it had been removed those messages would have failed, or be moved to a DLQ
• Having defunct queue managers still known in the cluster might make your monitoring and problem determination harder
– Automate the cluster removal
• From the deleting queue manager
– Stop and delete the cluster channels – give it a little time to work through to the FRs
• Or from a full repository – as it’ll never be coming back
– RESET the queue manager out of the cluster
61
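The removal steps above can be sketched in MQSC. The cluster, queue manager, and channel names here (DEMO, QM1, DEMO.TO.QM1, DEMO.TO.FR1) are illustrative, not from the deck:

```mqsc
* On QM1, the queue manager leaving the cluster
SUSPEND QMGR CLUSTER(DEMO)
* Drain or re-route any messages you still need, then stop and
* delete its cluster-receiver and manually defined cluster-sender
STOP CHANNEL(DEMO.TO.QM1) MODE(QUIESCE)
DELETE CHANNEL(DEMO.TO.QM1)
DELETE CHANNEL(DEMO.TO.FR1)
* Later, from a full repository, once QM1 is never coming back
RESET CLUSTER(DEMO) ACTION(FORCEREMOVE) QMNAME(QM1) QUEUES(YES)
```

QUEUES(YES) also removes the departed queue manager’s cluster queues from the repositories’ knowledge, rather than waiting for them to expire.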
© 2019 IBM Corporation
Queue managers will move around
62
o Usually this is not actually a problem; clouds do a good job of hiding
that from you
o e.g. Kubernetes “services” are effectively providing DNS for a container
o Check the cloud capabilities available to you
o Probably best to use hostnames in the connection details rather than IP addresses
o Remember that often names/addresses may only be visible from within the cloud
o External addresses may differ, comma separated CONNAMEs can help here
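A comma-separated CONNAME lets one cluster-receiver advertise both an internal and an external route. A sketch, with hypothetical hostnames:

```mqsc
* The second address is the externally visible route to the same listener
DEFINE CHANNEL(DEMO.TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('qm1.internal(1414),qm1.example.com(1414)') +
       CLUSTER(DEMO) REPLACE
```

Connecting queue managers try each address in turn until one succeeds.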
© 2019 IBM Corporation
Control over the cluster
63
Keep it simple
In a potentially self-service world, where new applications need their own queue manager, overall
control of configuring everything in the cluster is not desirable. To prevent chaos, don’t just pick up
your bespoke on-prem cluster topology and put it in the cloud – re-think it:
o Simplify the cluster topology, do you really need all those overlapping clusters?
o Make sure the cluster infrastructure (the full repositories, gateway queue managers) are still under your
control
o Script the joining/leaving of a cluster and automate this with the deployment of new queue managers
o A cluster is a shared namespace, enforce naming conventions on cluster queues and topics
o Use your full repositories as a view across the cluster
© 2019 IBM Corporation
Clustering good practices
© 2019 IBM Corporation
Full repositories
65
o Dedicate a pair of queue managers to being the full
repositories for the cluster
o Don’t even think of them as queue managers
o A pair for redundancy obviously, but why not more?
o More than two won’t work the way you hope it will…
o A queue manager will randomly pick two full
repositories to work with (manually defining your
sender channels doesn’t guarantee which two)
o If those two are unavailable it won’t go off and
find another one
FR
QMGRFR
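The dedicated pair pattern can be sketched as follows, run on each full repository in turn. The names (DEMO, FR1, FR2) and hostnames are illustrative:

```mqsc
* On FR1 (repeat symmetrically on FR2, swapping the names)
ALTER QMGR REPOS(DEMO)
DEFINE CHANNEL(DEMO.TO.FR1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('fr1.example.com(1414)') CLUSTER(DEMO)
DEFINE CHANNEL(DEMO.TO.FR2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('fr2.example.com(1414)') CLUSTER(DEMO)
```

Partial repositories then need only a single manually defined cluster-sender to one of the pair; they learn the rest.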
© 2019 IBM Corporation
Check your defaults
66
Cluster transmission queues DEFCLXQ
o MQ started with each queue manager having just the one
o It added the ability to automatically split that out into one per cluster channel
o This improves visibility of channel problems
o And reduces the cross-channel impacts of a blocked channel
Cluster workload rank and priority CLWLRANK/CLWLPRTY
o These are used as a tie breaker across multiple queues/channels to determine where messages are sent
o They default to zero (the lowest), so you can raise one to make it more favourable
o But you can’t lower one to make it less favourable
o Consider setting them all to be around the middle (e.g. 5) to give yourself options
Default application bind setting DEFBIND
o The default is “on open”
o Great if you need affinity across messages, poor if you value message availability over that
o Consider setting this to “not fixed”
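The three default changes above can be sketched in MQSC; the queue and cluster names are illustrative:

```mqsc
* One cluster transmission queue per cluster-sender channel
ALTER QMGR DEFCLXQ(CHANNEL)
* Mid-range rank/priority for headroom in both directions,
* and no open-time affinity so messages can be rerouted
DEFINE QLOCAL(APP.REQUEST) CLUSTER(DEMO) +
       CLWLRANK(5) CLWLPRTY(5) DEFBIND(NOTFIXED)
```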
© 2019 IBM Corporation
67
Problem diagnosis
© 2019 IBM Corporation
Problem diagnosis
Cluster queues are not visible where you think they should be, or messages sit on a transmission
queue for no reason…
68
It’s often:
o A break in communication between queue managers
o Check the PR->FR and FR->PR channels
o A config issue
o Check the host of the resource and that the queue manager has correctly joined the cluster
What to check:
o Configuration – double check the cluster names
o Queue manager cluster knowledge – are the queue managers known to the FRs?
o Queue manager channels – check the PR->FR and FR->PR channels
o Queue manager error logs – look for messages about expiring objects
o System queues - …
Sequence of diagnosis steps:
o Check the queue manager where the cluster definitions live
o Check all the full repositories
o Check the queue manager where you’re sending the messages from
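A few MQSC displays cover most of the checks above; APP.REQUEST is a hypothetical cluster queue name:

```mqsc
* Is every queue manager known, and is it suspended or its channel down?
DISPLAY CLUSQMGR(*) STATUS SUSPEND
* Any channels not running?
DISPLAY CHSTATUS(*) WHERE(STATUS NE RUNNING)
* Does this queue manager know the cluster queue, and on which hosts?
DISPLAY QCLUSTER(APP.REQUEST) CLUSQMGR
```

Run these on the queue manager hosting the definitions, on the full repositories, and on the sending queue manager, in that order.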
© 2019 IBM Corporation
Know your system cluster queues
69
SYSTEM.CLUSTER.COMMAND.QUEUE
o Source of all work for the cluster repository process
o Messages arrive here from local configuration
commands and from FR->PR and PR->FR
o If you delete messages from here you may need a
cluster refresh
o Steady state: empty
SYSTEM.CLUSTER.REPOSITORY.QUEUE
o Persistent store for accumulated cluster knowledge
o All cluster object changes persisted here
o Based on local configuration and remote knowledge
o If you delete messages from here your only option is a
cluster refresh
o Steady state: non-empty
SYSTEM.CLUSTER.TRANSMIT.*
o Used to transfer all messages within a cluster
o Shared between user messages and cluster control
messages
o If you delete messages from here you may need a
cluster refresh, and you may have lost your own
messages!
o Steady state: Ideally empty (may contain messages
whilst channels are down)
SYSTEM.CLUSTER.HISTORY.QUEUE
o Intended for capturing diagnostics for IBM MQ Service
teams
o Only used if REFRESH CLUSTER is issued
o Entire contents of local cluster cache written prior to
processing REFRESH
o Messages expire after 180 days
o Steady state: ignore it
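A quick steady-state health check across all of these system queues:

```mqsc
* REPOSITORY.QUEUE should be non-empty; COMMAND and TRANSMIT
* queues should ideally be empty (or draining, if channels were down)
DISPLAY QLOCAL(SYSTEM.CLUSTER.*) CURDEPTH
```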
Thank You
David Ware
Chief Architect, IBM MQ
dware@uk.ibm.com
www.linkedin.com/in/dware1

Weitere ähnliche Inhalte

Was ist angesagt?

IBM MQ Online Tutorials
IBM MQ Online TutorialsIBM MQ Online Tutorials
IBM MQ Online TutorialsBigClasses.com
 
IBM MQ: An Introduction to Using and Developing with MQ Publish/Subscribe
IBM MQ: An Introduction to Using and Developing with MQ Publish/SubscribeIBM MQ: An Introduction to Using and Developing with MQ Publish/Subscribe
IBM MQ: An Introduction to Using and Developing with MQ Publish/SubscribeDavid Ware
 
IBM MQ - High Availability and Disaster Recovery
IBM MQ - High Availability and Disaster RecoveryIBM MQ - High Availability and Disaster Recovery
IBM MQ - High Availability and Disaster RecoveryMarkTaylorIBM
 
IBM Websphere MQ Basic
IBM Websphere MQ BasicIBM Websphere MQ Basic
IBM Websphere MQ BasicPRASAD BHATKAR
 
What's new with MQ on z/OS 9.3 and 9.3.1
What's new with MQ on z/OS 9.3 and 9.3.1What's new with MQ on z/OS 9.3 and 9.3.1
What's new with MQ on z/OS 9.3 and 9.3.1Matt Leming
 
IBM MQ and Kafka, what is the difference?
IBM MQ and Kafka, what is the difference?IBM MQ and Kafka, what is the difference?
IBM MQ and Kafka, what is the difference?David Ware
 
IBM Cloud Integration Platform High Availability - Integration Tech Conference
IBM Cloud Integration Platform High Availability - Integration Tech ConferenceIBM Cloud Integration Platform High Availability - Integration Tech Conference
IBM Cloud Integration Platform High Availability - Integration Tech ConferenceRobert Nicholson
 
IBM MQ High Availabillity and Disaster Recovery (2017 version)
IBM MQ High Availabillity and Disaster Recovery (2017 version)IBM MQ High Availabillity and Disaster Recovery (2017 version)
IBM MQ High Availabillity and Disaster Recovery (2017 version)MarkTaylorIBM
 
IBM Think 2018: IBM MQ High Availability
IBM Think 2018: IBM MQ High AvailabilityIBM Think 2018: IBM MQ High Availability
IBM Think 2018: IBM MQ High AvailabilityJamie Squibb
 
IBM MQ - better application performance
IBM MQ - better application performanceIBM MQ - better application performance
IBM MQ - better application performanceMarkTaylorIBM
 
Websphere MQ (MQSeries) fundamentals
Websphere MQ (MQSeries) fundamentalsWebsphere MQ (MQSeries) fundamentals
Websphere MQ (MQSeries) fundamentalsBiju Nair
 
IBM MQ in Containers - Think 2018
IBM MQ in Containers - Think 2018IBM MQ in Containers - Think 2018
IBM MQ in Containers - Think 2018Robert Parker
 
Designing IBM MQ deployments for the cloud generation
Designing IBM MQ deployments for the cloud generationDesigning IBM MQ deployments for the cloud generation
Designing IBM MQ deployments for the cloud generationDavid Ware
 
An Introduction to the Message Queuing Technology & IBM WebSphere MQ
An Introduction to the Message Queuing Technology & IBM WebSphere MQAn Introduction to the Message Queuing Technology & IBM WebSphere MQ
An Introduction to the Message Queuing Technology & IBM WebSphere MQRavi Yogesh
 
IBM MQ Whats new - up to 9.3.4.pdf
IBM MQ Whats new - up to 9.3.4.pdfIBM MQ Whats new - up to 9.3.4.pdf
IBM MQ Whats new - up to 9.3.4.pdfRobert Parker
 
Mq presentation
Mq presentationMq presentation
Mq presentationxddu
 
531: Controlling access to your IBM MQ system
531: Controlling access to your IBM MQ system531: Controlling access to your IBM MQ system
531: Controlling access to your IBM MQ systemRobert Parker
 
Websphere MQ admin guide
Websphere MQ admin guideWebsphere MQ admin guide
Websphere MQ admin guideRam Babu
 

Was ist angesagt? (20)

IBM MQ Online Tutorials
IBM MQ Online TutorialsIBM MQ Online Tutorials
IBM MQ Online Tutorials
 
IBM MQ: An Introduction to Using and Developing with MQ Publish/Subscribe
IBM MQ: An Introduction to Using and Developing with MQ Publish/SubscribeIBM MQ: An Introduction to Using and Developing with MQ Publish/Subscribe
IBM MQ: An Introduction to Using and Developing with MQ Publish/Subscribe
 
IBM MQ - High Availability and Disaster Recovery
IBM MQ - High Availability and Disaster RecoveryIBM MQ - High Availability and Disaster Recovery
IBM MQ - High Availability and Disaster Recovery
 
IBM Websphere MQ Basic
IBM Websphere MQ BasicIBM Websphere MQ Basic
IBM Websphere MQ Basic
 
What's new with MQ on z/OS 9.3 and 9.3.1
What's new with MQ on z/OS 9.3 and 9.3.1What's new with MQ on z/OS 9.3 and 9.3.1
What's new with MQ on z/OS 9.3 and 9.3.1
 
IBM MQ and Kafka, what is the difference?
IBM MQ and Kafka, what is the difference?IBM MQ and Kafka, what is the difference?
IBM MQ and Kafka, what is the difference?
 
IBM Cloud Integration Platform High Availability - Integration Tech Conference
IBM Cloud Integration Platform High Availability - Integration Tech ConferenceIBM Cloud Integration Platform High Availability - Integration Tech Conference
IBM Cloud Integration Platform High Availability - Integration Tech Conference
 
IBM MQ High Availabillity and Disaster Recovery (2017 version)
IBM MQ High Availabillity and Disaster Recovery (2017 version)IBM MQ High Availabillity and Disaster Recovery (2017 version)
IBM MQ High Availabillity and Disaster Recovery (2017 version)
 
IBM Think 2018: IBM MQ High Availability
IBM Think 2018: IBM MQ High AvailabilityIBM Think 2018: IBM MQ High Availability
IBM Think 2018: IBM MQ High Availability
 
IBM MQ - better application performance
IBM MQ - better application performanceIBM MQ - better application performance
IBM MQ - better application performance
 
Websphere MQ (MQSeries) fundamentals
Websphere MQ (MQSeries) fundamentalsWebsphere MQ (MQSeries) fundamentals
Websphere MQ (MQSeries) fundamentals
 
IBM MQ in Containers - Think 2018
IBM MQ in Containers - Think 2018IBM MQ in Containers - Think 2018
IBM MQ in Containers - Think 2018
 
WebSphere MQ tutorial
WebSphere MQ tutorialWebSphere MQ tutorial
WebSphere MQ tutorial
 
Designing IBM MQ deployments for the cloud generation
Designing IBM MQ deployments for the cloud generationDesigning IBM MQ deployments for the cloud generation
Designing IBM MQ deployments for the cloud generation
 
An Introduction to the Message Queuing Technology & IBM WebSphere MQ
An Introduction to the Message Queuing Technology & IBM WebSphere MQAn Introduction to the Message Queuing Technology & IBM WebSphere MQ
An Introduction to the Message Queuing Technology & IBM WebSphere MQ
 
WebSphere MQ introduction
WebSphere MQ introductionWebSphere MQ introduction
WebSphere MQ introduction
 
IBM MQ Whats new - up to 9.3.4.pdf
IBM MQ Whats new - up to 9.3.4.pdfIBM MQ Whats new - up to 9.3.4.pdf
IBM MQ Whats new - up to 9.3.4.pdf
 
Mq presentation
Mq presentationMq presentation
Mq presentation
 
531: Controlling access to your IBM MQ system
531: Controlling access to your IBM MQ system531: Controlling access to your IBM MQ system
531: Controlling access to your IBM MQ system
 
Websphere MQ admin guide
Websphere MQ admin guideWebsphere MQ admin guide
Websphere MQ admin guide
 

Ähnlich wie IBM MQ 2019: Build a Fault Tolerant and Scalable Solution

IBM Managing Workload Scalability with MQ Clusters
IBM Managing Workload Scalability with MQ ClustersIBM Managing Workload Scalability with MQ Clusters
IBM Managing Workload Scalability with MQ ClustersIBM Systems UKI
 
Building a resilient and scalable solution with IBM MQ on z/OS
Building a resilient and scalable solution with IBM MQ on z/OSBuilding a resilient and scalable solution with IBM MQ on z/OS
Building a resilient and scalable solution with IBM MQ on z/OSMatt Leming
 
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ Clusters
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ ClustersIBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ Clusters
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ ClustersDavid Ware
 
20200113 - IBM Cloud Côte d'Azur - DeepDive Kubernetes
20200113 - IBM Cloud Côte d'Azur - DeepDive Kubernetes20200113 - IBM Cloud Côte d'Azur - DeepDive Kubernetes
20200113 - IBM Cloud Côte d'Azur - DeepDive KubernetesIBM France Lab
 
MQ Guide France - What's new in ibm mq 9.1.4
MQ Guide France - What's new in ibm mq 9.1.4MQ Guide France - What's new in ibm mq 9.1.4
MQ Guide France - What's new in ibm mq 9.1.4Robert Parker
 
What's New In MQ 9.2 on z/OS
What's New In MQ 9.2 on z/OSWhat's New In MQ 9.2 on z/OS
What's New In MQ 9.2 on z/OSMatt Leming
 
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...Peter Broadhurst
 
IBM IMPACT 2014 AMC-1866 Introduction to IBM Messaging Capabilities
IBM IMPACT 2014 AMC-1866 Introduction to IBM Messaging CapabilitiesIBM IMPACT 2014 AMC-1866 Introduction to IBM Messaging Capabilities
IBM IMPACT 2014 AMC-1866 Introduction to IBM Messaging CapabilitiesPeter Broadhurst
 
Where is my MQ message on z/OS?
Where is my MQ message on z/OS?Where is my MQ message on z/OS?
Where is my MQ message on z/OS?Matt Leming
 
Connecting Applications Everywhere with ActiveMQ
Connecting Applications Everywhere with ActiveMQConnecting Applications Everywhere with ActiveMQ
Connecting Applications Everywhere with ActiveMQRob Davies
 
Introduction to NServiceBus
Introduction to NServiceBusIntroduction to NServiceBus
Introduction to NServiceBusAdam Fyles
 
Building highly available architectures with WAS and MQ
Building highly available architectures with WAS and MQBuilding highly available architectures with WAS and MQ
Building highly available architectures with WAS and MQMatthew White
 
Mule soft mcia-level-1 Dumps
Mule soft mcia-level-1 DumpsMule soft mcia-level-1 Dumps
Mule soft mcia-level-1 DumpsArmstrongsmith
 
Deep Automation and ML-Driven Analytics for Application Services
Deep Automation and ML-Driven Analytics for Application ServicesDeep Automation and ML-Driven Analytics for Application Services
Deep Automation and ML-Driven Analytics for Application ServicesAvi Networks
 
Let’s Make Your CFO Happy; A Practical Guide for Kafka Cost Reduction with El...
Let’s Make Your CFO Happy; A Practical Guide for Kafka Cost Reduction with El...Let’s Make Your CFO Happy; A Practical Guide for Kafka Cost Reduction with El...
Let’s Make Your CFO Happy; A Practical Guide for Kafka Cost Reduction with El...HostedbyConfluent
 
Cloud-based performance testing
Cloud-based performance testingCloud-based performance testing
Cloud-based performance testingabhinavm
 
Breaking the Monolith road to containers.pdf
Breaking the Monolith road to containers.pdfBreaking the Monolith road to containers.pdf
Breaking the Monolith road to containers.pdfAmazon Web Services
 
So you want to provision a test environment...
So you want to provision a test environment...So you want to provision a test environment...
So you want to provision a test environment...DevOps.com
 
IBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster RecoveryIBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster RecoveryRob Convery
 
Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi...
Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi...Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi...
Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi...Prolifics
 

Ähnlich wie IBM MQ 2019: Build a Fault Tolerant and Scalable Solution (20)

IBM Managing Workload Scalability with MQ Clusters
IBM Managing Workload Scalability with MQ ClustersIBM Managing Workload Scalability with MQ Clusters
IBM Managing Workload Scalability with MQ Clusters
 
Building a resilient and scalable solution with IBM MQ on z/OS
Building a resilient and scalable solution with IBM MQ on z/OSBuilding a resilient and scalable solution with IBM MQ on z/OS
Building a resilient and scalable solution with IBM MQ on z/OS
 
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ Clusters
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ ClustersIBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ Clusters
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ Clusters
 
20200113 - IBM Cloud Côte d'Azur - DeepDive Kubernetes
20200113 - IBM Cloud Côte d'Azur - DeepDive Kubernetes20200113 - IBM Cloud Côte d'Azur - DeepDive Kubernetes
20200113 - IBM Cloud Côte d'Azur - DeepDive Kubernetes
 
MQ Guide France - What's new in ibm mq 9.1.4
MQ Guide France - What's new in ibm mq 9.1.4MQ Guide France - What's new in ibm mq 9.1.4
MQ Guide France - What's new in ibm mq 9.1.4
 
What's New In MQ 9.2 on z/OS
What's New In MQ 9.2 on z/OSWhat's New In MQ 9.2 on z/OS
What's New In MQ 9.2 on z/OS
 
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
 
IBM IMPACT 2014 AMC-1866 Introduction to IBM Messaging Capabilities
IBM IMPACT 2014 AMC-1866 Introduction to IBM Messaging CapabilitiesIBM IMPACT 2014 AMC-1866 Introduction to IBM Messaging Capabilities
IBM IMPACT 2014 AMC-1866 Introduction to IBM Messaging Capabilities
 
Where is my MQ message on z/OS?
Where is my MQ message on z/OS?Where is my MQ message on z/OS?
Where is my MQ message on z/OS?
 
Connecting Applications Everywhere with ActiveMQ
Connecting Applications Everywhere with ActiveMQConnecting Applications Everywhere with ActiveMQ
Connecting Applications Everywhere with ActiveMQ
 
Introduction to NServiceBus
Introduction to NServiceBusIntroduction to NServiceBus
Introduction to NServiceBus
 
Building highly available architectures with WAS and MQ
Building highly available architectures with WAS and MQBuilding highly available architectures with WAS and MQ
Building highly available architectures with WAS and MQ
 
Mule soft mcia-level-1 Dumps
Mule soft mcia-level-1 DumpsMule soft mcia-level-1 Dumps
Mule soft mcia-level-1 Dumps
 
Deep Automation and ML-Driven Analytics for Application Services
Deep Automation and ML-Driven Analytics for Application ServicesDeep Automation and ML-Driven Analytics for Application Services
Deep Automation and ML-Driven Analytics for Application Services
 
Let’s Make Your CFO Happy; A Practical Guide for Kafka Cost Reduction with El...
Let’s Make Your CFO Happy; A Practical Guide for Kafka Cost Reduction with El...Let’s Make Your CFO Happy; A Practical Guide for Kafka Cost Reduction with El...
Let’s Make Your CFO Happy; A Practical Guide for Kafka Cost Reduction with El...
 
Cloud-based performance testing
Cloud-based performance testingCloud-based performance testing
Cloud-based performance testing
 
Breaking the Monolith road to containers.pdf
Breaking the Monolith road to containers.pdfBreaking the Monolith road to containers.pdf
Breaking the Monolith road to containers.pdf
 
So you want to provision a test environment...
So you want to provision a test environment...So you want to provision a test environment...
So you want to provision a test environment...
 
IBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster RecoveryIBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
 
Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi...
Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi...Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi...
Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi...
 

Mehr von David Ware

IBM MQ What's new - Sept 2022
IBM MQ What's new - Sept 2022IBM MQ What's new - Sept 2022
IBM MQ What's new - Sept 2022David Ware
 
IBM MQ Update, including 9.1.2 CD
IBM MQ Update, including 9.1.2 CDIBM MQ Update, including 9.1.2 CD
IBM MQ Update, including 9.1.2 CDDavid Ware
 
Whats new in MQ V9.1
Whats new in MQ V9.1Whats new in MQ V9.1
Whats new in MQ V9.1David Ware
 
What's new in IBM MQ, March 2018
What's new in IBM MQ, March 2018What's new in IBM MQ, March 2018
What's new in IBM MQ, March 2018David Ware
 
Whats new in IBM MQ; V9 LTS, V9.0.1 CD and V9.0.2 CD
Whats new in IBM MQ; V9 LTS, V9.0.1 CD and V9.0.2 CDWhats new in IBM MQ; V9 LTS, V9.0.1 CD and V9.0.2 CD
Whats new in IBM MQ; V9 LTS, V9.0.1 CD and V9.0.2 CDDavid Ware
 
InterConnect 2016: IBM MQ self-service and as-a-service
InterConnect 2016: IBM MQ self-service and as-a-serviceInterConnect 2016: IBM MQ self-service and as-a-service
InterConnect 2016: IBM MQ self-service and as-a-serviceDavid Ware
 
InterConnect 2016: What's new in IBM MQ
InterConnect 2016: What's new in IBM MQInterConnect 2016: What's new in IBM MQ
InterConnect 2016: What's new in IBM MQDavid Ware
 
IBM MQ: Using Publish/Subscribe in an MQ Network
IBM MQ: Using Publish/Subscribe in an MQ NetworkIBM MQ: Using Publish/Subscribe in an MQ Network
IBM MQ: Using Publish/Subscribe in an MQ NetworkDavid Ware
 
IBM WebSphere MQ: Using Publish/Subscribe in an MQ Network
IBM WebSphere MQ: Using Publish/Subscribe in an MQ NetworkIBM WebSphere MQ: Using Publish/Subscribe in an MQ Network
IBM WebSphere MQ: Using Publish/Subscribe in an MQ NetworkDavid Ware
 

Mehr von David Ware (9)

IBM MQ What's new - Sept 2022
IBM MQ What's new - Sept 2022IBM MQ What's new - Sept 2022
IBM MQ What's new - Sept 2022
 
IBM MQ Update, including 9.1.2 CD
IBM MQ Update, including 9.1.2 CDIBM MQ Update, including 9.1.2 CD
IBM MQ Update, including 9.1.2 CD
 
Whats new in MQ V9.1
Whats new in MQ V9.1Whats new in MQ V9.1
Whats new in MQ V9.1
 
What's new in IBM MQ, March 2018
What's new in IBM MQ, March 2018What's new in IBM MQ, March 2018
What's new in IBM MQ, March 2018
 
Whats new in IBM MQ; V9 LTS, V9.0.1 CD and V9.0.2 CD
Whats new in IBM MQ; V9 LTS, V9.0.1 CD and V9.0.2 CDWhats new in IBM MQ; V9 LTS, V9.0.1 CD and V9.0.2 CD
Whats new in IBM MQ; V9 LTS, V9.0.1 CD and V9.0.2 CD
 
InterConnect 2016: IBM MQ self-service and as-a-service
InterConnect 2016: IBM MQ self-service and as-a-serviceInterConnect 2016: IBM MQ self-service and as-a-service
InterConnect 2016: IBM MQ self-service and as-a-service
 
InterConnect 2016: What's new in IBM MQ
InterConnect 2016: What's new in IBM MQInterConnect 2016: What's new in IBM MQ
InterConnect 2016: What's new in IBM MQ
 
IBM MQ: Using Publish/Subscribe in an MQ Network
IBM MQ: Using Publish/Subscribe in an MQ NetworkIBM MQ: Using Publish/Subscribe in an MQ Network
IBM MQ: Using Publish/Subscribe in an MQ Network
 
IBM WebSphere MQ: Using Publish/Subscribe in an MQ Network
IBM WebSphere MQ: Using Publish/Subscribe in an MQ NetworkIBM WebSphere MQ: Using Publish/Subscribe in an MQ Network
IBM WebSphere MQ: Using Publish/Subscribe in an MQ Network
 

Kürzlich hochgeladen

MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based projectAnoyGreter
 
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)jennyeacort
 
SpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at RuntimeSpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at Runtimeandrehoraa
 
What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...Technogeeks
 
Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureDinusha Kumarasiri
 
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEBATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEOrtus Solutions, Corp
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Matt Ray
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesŁukasz Chruściel
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEEVICTOR MAESTRE RAMIREZ
 
What is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWhat is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWave PLM
 
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...confluent
 
Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Mater
 
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanySuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanyChristoph Pohl
 
CRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceCRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceBrainSell Technologies
 
What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....kzayra69
 
A healthy diet for your Java application Devoxx France.pdf
A healthy diet for your Java application Devoxx France.pdfA healthy diet for your Java application Devoxx France.pdf
A healthy diet for your Java application Devoxx France.pdfMarharyta Nedzelska
 
React Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaReact Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaHanief Utama
 
Cyber security and its impact on E commerce
Cyber security and its impact on E commerceCyber security and its impact on E commerce
Cyber security and its impact on E commercemanigoyal112
 

Kürzlich hochgeladen (20)

MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based project
 
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
 
SpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at RuntimeSpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at Runtime
 
What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...What is Advanced Excel and what are some best practices for designing and cre...
What is Advanced Excel and what are some best practices for designing and cre...
 
Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with Azure
 

IBM MQ 2019: Build a Fault Tolerant and Scalable Solution

  • 1. Integration Technical Conference 2019 Build a truly fault tolerant and scalable IBM MQ solution David Ware Chief Architect, IBM MQ
  • 2. © 2019 IBM Corporation Fault Tolerance 2 Queue Manager Single Queue Manager Queue Manager Queue Manager Queue Manager Multiple 100% 0% 100% 0% availability availability More is better for availability
  • 3. © 2019 IBM Corporation Queue Manager Queue Manager Queue Manager It’s not just the queue managers… 3 Client Client Client Step 1 Horizontally scale the application into multiple instances, all performing the same role A queue manager works better when there are multiple applications working in parallel Queue Manager Step 2 Horizontally scale the queue managers Create multiple queue managers with the ‘same’ configuration Distribute the application instances across the queue managers Client Client Client Client Client Client Queue Manager Queue Manager Queue Manager
  • 4. © 2019 IBM Corporation Scaling 4 Single Multiple Queue Manager Queue Manager Queue Manager Queue Manager Queue Manager Queue Manager Queue Manager Queue Manager Queue Manager Queue Manager x1x4 x1x2x3x4x5x6x7x8xn More is better for scalability
  • 5. © 2019 IBM Corporation 5 Let’s go through that availability thing one step at a time…
  • 6. © 2019 IBM Corporation Node Single, non-HA queue manager 6 App App App Queue Manager 100% 0% 100% 0% System availability Message availability While the QMgr is down all of the applications need to wait for it to be restarted All queued messages require the QMgr to restart How much messaging work can proceed Proportion of queued messages that are available
  • 7. © 2019 IBM Corporation Node A Node B Node C Single HA queue manager 7 App App App Queue Manager Queue Manager Queue Manager Highly available queue manager and queue instances 100% 0% 100% 0% HA queue managers are restarted quickly (typically a ‘few’ seconds) System availability Message availability Here we have one active instance of the queue manager with two replica (standby) instances ready to take over. (This happens to be the MQ RDQM HA model; other solutions, like multi-instance queue managers, are subtly different (only one standby) but essentially the same)
  • 8. © 2019 IBM Corporation Node A Node B Node C Multiple HA queue managers 8 Queue Manager 1 App App App App App App App Queue Manager 1 Queue Manager 1 Highly available queue manager and queue instances 1/3 of message traffic Queue Manager 2 Queue Manager 2 Queue Manager 2 Highly available queue manager and queue instances 1/3 of message traffic Queue Manager 3 Queue Manager 3 Queue Manager 3 Highly available queue manager and queue instances 1/3 of message traffic App App To further increase the availability you need to remove the single point of failure that is a queue manager. For this, create multiple queue managers and stripe the messaging workload across them by defining the “same” queue on all of them. Each message is only queued on a single queue manager, but the multiple queue managers mean any one outage is confined to a subset of the workload
  • 9. © 2019 IBM Corporation Node A Node B Node C Multiple HA queue managers – ordered consumption 9 Queue Manager 1 App App App App App App App Queue Manager 1 Queue Manager 1 Highly available queue manager and queue instances 1/3 of message traffic Queue Manager 2 Queue Manager 2 Queue Manager 2 Highly available queue manager and queue instances 1/3 of message traffic Queue Manager 3 Queue Manager 3 Queue Manager 3 Highly available queue manager and queue instances 1/3 of message traffic App App Connect: QMgr1 Connect: QMgr2 Connect: QMgr3 100% 0% 100% 0% Messages that require ordering need to go to the same queue manager. One approach to achieve this is for application instances to connect to a specific queue manager based on their ordered stream of messages. This ensures messages are processed in order per-group While a QMgr is down a subset of applications needs to wait for it to restart System availability Message availability Queued messages are confined to a single queue manager. Therefore, the QMgr needs to restart to make those messages available. Hence the need to make each queue manager highly available
  • 10. © 2019 IBM Corporation Node A Node B Node C Multiple HA queue managers – unordered consumption 10 Queue Manager 1 App App App App App App App Queue Manager 1 Queue Manager 1 Highly available queue manager and queue instances 1/3 of message traffic Queue Manager 2 Queue Manager 2 Queue Manager 2 Highly available queue manager and queue instances 1/3 of message traffic Queue Manager 3 Queue Manager 3 Queue Manager 3 Highly available queue manager and queue instances 1/3 of message traffic App App Connect: QMgrGroup 100% 0% 100% 0% Application instances can connect to any queue manager, as the order in which messages are queued across the multiple instances is not a concern. Connecting across a group can be achieved with a CCDT queue manager group. There is always a running QMgr for an application to connect to for new work Queued messages are still confined to a single queue manager. Therefore, the QMgr still needs to restart to make those particular messages available. System availability Message availability
  • 11. © 2019 IBM Corporation Single vs. Multiple 11 Single o Simple o Invisible to applications o Limited by maximum system size o Liable to hit internal limits o Not all aspects scale linearly o Restart times can grow o Every outage is high impact Multiple o Unlimited by system size o All aspects scale linearly o More suited to cloud scaling o Reduced restart times o Enables rolling upgrades o Tolerate partial failures o Visible to applications – limitations apply o Potentially more complicated Queue Manager Queue Manager Queue Manager Queue Manager Queue Manager
  • 12. © 2019 IBM Corporation uniform cluster… 12 Try to stop thinking about each individual queue manager and start thinking about them as a cluster Queue Manager Queue Manager Queue Manager
  • 13. © 2019 IBM Corporation 13 The fundamentals on MQ Clusters (skip this if you know it)
  • 14. © 2019 IBM Corporation MQ clustering What MQ Clusters provide : 14 Availability routing Horizontal scaling of queues Configuration directory Dynamic registration and lookup Dynamic channel management Dynamic message routing Foundation
  • 15. © 2019 IBM Corporation cluster FR directory QMGR FR QMGR QMGR QMGR QMGR QMGR Cluster management 15 Full repositories learn everything about the cluster. These are the “directory servers” The other queue managers only learn about what they need. These are ”partial repositories”
  • 16. © 2019 IBM Corporation cluster FR directory QMGR FR QMGR QMGR QMGR QMGR QMGR App ? Queue managers persistently cache their knowledge of the cluster resources, limiting interactions with the full repositories Cluster management 16 Full repositories will pass on the details of cluster queues and the connection details of the queue manager they are located on Details of a clustered queue or topic are sent to the full repositories in the cluster
  • 17. © 2019 IBM Corporation cluster FR directory QMGR FR QMGR QMGR QMGR QMGR QMGR App Unused cluster information times out and is removed from the local cache Cluster management 17
  • 18. © 2019 IBM Corporation cluster FR directory QMGR FR QMGR QMGR QMGR QMGR QMGR If a queue manager is simply switched off, rather than exiting the cluster in a controlled manner, it will eventually be cleaned from every queue manager’s cache Cluster management 18
  • 19. © 2019 IBM Corporation 19 How to set up a cluster (skip this too if you know it)
  • 20. © 2019 IBM Corporation FR1 Step 1: Create your two full repositories ALTER QMGR REPOS('CLUS1') DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(FR1 location) DEFINE CHANNEL('CLUS1.FR2') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR2 location) FR2 ALTER QMGR REPOS('CLUS1') DEFINE CHANNEL('CLUS1.FR2') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(FR2 location) DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location) FR2 FR1 20
  • 21. © 2019 IBM Corporation FR1 FR2 QMGR1 DEFINE CHANNEL('CLUS1.QMGR1') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(QMGR1 location) DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location) DEFINE QLOCAL(Q1) CLUSTER(CLUS1) Q1 FR2 QMGR1 FR1 QMGR1 Q1@QMGR1 FR1 FR2 Step 2: Add in more queue managers DISPLAY CLUSQMGR(*) DISPLAY QCLUSTER(*) Q1@QMGR1 21
  • 22. © 2019 IBM Corporation FR1 FR2 QMGR1 DEFINE CHANNEL('CLUS1.QMGR1') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(QMGR1 location) DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location) DEFINE QLOCAL(Q1) CLUSTER(CLUS1) Q1 QMGR2 DEFINE CHANNEL('CLUS1.QMGR2') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(QMGR2 location) DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location) FR2 QMGR1 QMGR2 FR1 QMGR1 QMGR2 Q1@QMGR1 Q1@QMGR1 FR1 FR2 FR1 FR2 Step 2: Add in more queue managers 22
  • 23. © 2019 IBM Corporation FR1 FR2 QMGR1 Q1 QMGR2 FR2 QMGR1 QMGR2 FR1 QMGR1 QMGR2 Q1@QMGR1 Q1@QMGR1 Client 1 MQOPEN(Q1) Q1? FR1 FR2 QMGR1 FR1 FR2 Q1? Q1@QMGR1 Q1@QMGR1 Step 3: Start sending messages 23
  • 24. © 2019 IBM Corporation FR1 FR2 QMGR1 QMGR2 So all you needed… § Two full repository queue managers § A cluster receiver channel each § A single cluster sender each § No need to manage pairs of channels between each queue manager combination or their transmission queues § No need for remote queue definitions 24
  • 25. © 2019 IBM Corporation Horizontal scaling with MQ Clustering
  • 26. © 2019 IBM Corporation Horizontal scaling with MQ Clustering 26 Queue Manager Queue Manager Queue Manager Queue Manager Earlier we showed how to scale applications directly across multiple queue managers; with an MQ Cluster you can do that with queue manager-to-queue manager message traffic. A queue manager will typically route messages based on the name of the target queue In an MQ Cluster it is possible for multiple queue managers to independently define the same named queue Any queue manager that needs to route messages to that queue now has a choice… ?
  • 27. © 2019 IBM Corporation Channel workload balancing 27 App 1App 1Client Queue Manager Queue Manager Queue Manager Queue Manager • Cluster workload balancing applies when there are multiple cluster queues of the same name • Cluster workload balancing will be applied in one of three ways: • When the putting application opens the queue - bind on open • When a message group is started - bind on group • When a message is put to the queue - bind not fixed • When workload balancing is applied: • The source queue manager builds a list of all potential targets based on the queue name • Eliminates the impossible options • Prioritises the remainder • If more than one come out equal, workload balancing ensues … • Balancing is based on: • The channel – not the target queue • Channel traffic to all queues is taken into account • Weightings can be applied to the channel • … this is used to send the messages to the chosen target
  • 28. © 2019 IBM Corporation • Cluster workload balancing applies when there are multiple cluster queues of the same name • Cluster workload balancing will be applied in one of three ways: • When the putting application opens the queue - bind on open • When a message group is started - bind on group • When a message is put to the queue - bind not fixed • When workload balancing is applied: • The source queue manager builds a list of all potential targets based on the queue name • Eliminates the impossible options • Prioritises the remainder • If more than one come out equal, workload balancing ensues … • Balancing is based on: • The channel – not the target queue • Channel traffic to all queues is taken into account • Weightings can be applied to the channel • … this is used to send the messages to the chosen target Channel workload balancing 28 Client Queue Manager Queue Manager Queue Manager Queue Manager Client
  • 29. © 2019 IBM Corporation • Cluster workload balancing applies when there are multiple cluster queues of the same name • Cluster workload balancing will be applied in one of three ways: • When the putting application opens the queue - bind on open • When a message group is started - bind on group • When a message is put to the queue - bind not fixed • When workload balancing is applied: • The source queue manager builds a list of all potential targets based on the queue name • Eliminates the impossible options • Prioritises the remainder • If more than one come out equal, workload balancing ensues … • Balancing is based on: • The channel – not the target queue • Channel traffic to all queues is taken into account • Weightings can be applied to the channel • … this is used to send the messages to the chosen target Channel workload balancing 29 App 1App 1Client Queue Manager Queue Manager Queue Manager Queue Manager
  • 30. © 2019 IBM Corporation • Cluster workload balancing applies when there are multiple cluster queues of the same name • Cluster workload balancing will be applied in one of three ways: • When the putting application opens the queue - bind on open • When a message group is started - bind on group • When a message is put to the queue - bind not fixed • When workload balancing is applied: • The source queue manager builds a list of all potential targets based on the queue name • Eliminates the impossible options • Prioritises the remainder • If more than one come out equal, workload balancing ensues … • Balancing is based on: • The channel – not the target queue • Channel traffic to all queues is taken into account • Weightings can be applied to the channel • … this is used to send the messages to the chosen target Channel workload balancing 30 App 1App 1Client Queue Manager Queue Manager Queue Manager Queue Manager Tip: By default, a matching queue on the same queue manager that the application is connected to will be prioritized over all others for speed. To overcome that, look at CLWLUSEQ
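The channel workload balancing steps repeated on the slides above (build the candidate list from the queue name, eliminate the impossible options, prioritise the remainder, then balance on the channel) can be sketched as a small simulation. This is illustrative only, not MQ's actual algorithm; the `priority`, `weight` and `sent` fields merely stand in for CLWLPRTY, CLWLWGHT and accumulated channel traffic.

```python
# Illustrative sketch of cluster workload balancing target selection.
# Not MQ internals: field names stand in for CLWLPRTY/CLWLWGHT/channel traffic.
from dataclasses import dataclass

@dataclass
class Target:
    qmgr: str
    channel_running: bool = True   # is the channel to this target working?
    priority: int = 0              # stands in for CLWLPRTY
    weight: int = 50               # stands in for CLWLWGHT
    sent: int = 0                  # messages already routed via this channel

def choose_target(targets):
    # Eliminate the impossible options (e.g. channel not running)
    candidates = [t for t in targets if t.channel_running]
    if not candidates:
        return None
    # Prioritise the remainder: only the highest priority survives
    top = max(t.priority for t in candidates)
    candidates = [t for t in candidates if t.priority == top]
    # If more than one come out equal, balance on the channel:
    # pick the least (weighted) used channel so far
    chosen = min(candidates, key=lambda t: t.sent / t.weight)
    chosen.sent += 1
    return chosen
```

With two equal targets the choice alternates; a target with a stopped channel is never chosen, and a higher-priority target always wins.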
  • 31. © 2019 IBM Corporation Horizontal scaling – do I really need MQ Clustering? 31 App 1 App 1 Client Service Scaled out applications App 1 App 1 Client Service App 1 App 1 Client Service Queue Manager Queue Manager Queue Manager Q. Is clustering required? A. Maybe, maybe not … Service Single producing application Client Service Service Queue Manager Queue Manager Queue Manager Q. Is clustering required? A. Definitely Q. Is clustering required? A. Definitely Service Service App 1 App 1 Client Gateway routing Service Queue Manager Queue Manager Queue Manager Queue Manager
  • 32. © 2019 IBM Corporation Back to the uniform cluster
  • 33. © 2019 IBM Corporation Building scalable, fault tolerant, solutions Many of you have built your own continuously available and horizontally scalable solutions over the years Let’s call this the “uniform cluster” pattern MQ has provided you many of the building blocks - Client auto-reconnect CCDT queue manager groups But you’re left to solve some of the problems, particularly with long running applications - Efficiently distributing your applications Ensuring all messages are processed Maintaining availability during maintenance Handling growth and contraction of scale App App App decoupled AppApp 33
  • 34. © 2019 IBM Corporation Uniform Cluster 34 MQ 9.1.2 started to make that easier For the distributed platforms, declare a set of matching queue managers to be following the uniform cluster pattern All members of an MQ Cluster Matching queues are defined on every queue manager Applications can connect as clients to every queue manager MQ will automatically share application connectivity knowledge between queue managers The group will use this knowledge to automatically keep matching application instances balanced across the queue managers Matching applications are based on application name (new abilities to programmatically define this) MQ 9.1.2 started to roll out the client support for this IBM MQ 9.1.2 CD Application awareness https://developer.ibm.com/messaging/2019/03/21/building-scalable-fault-tolerant-ibm-mq-systems/
  • 35. © 2019 IBM Corporation Application awareness 35 App App Automatic Application balancing Application instances can initially connect to any member of the group We recommend you use a queue manager group and CCDT to remove any SPoF Every member of the uniform cluster will detect an imbalance and request other queue managers to donate their applications Hosting queue managers will instigate a client auto-reconnect with instructions of where to reconnect to Applications that have enabled auto-reconnect will automatically move their connection to the indicated queue manager Client support has been increased over subsequent CD releases. 9.1.2 CD started with support for C-based applications, 9.1.3 CD added JMS … App App App App IBM MQ 9.1.2 CD https://developer.ibm.com/messaging/2019/03/21/building-scalable-fault-tolerant-ibm-mq-systems/
  • 36. © 2019 IBM Corporation36 App App Automatic Application balancing Automatically handle rebalancing following planned and unplanned queue manager outages Existing client auto-reconnect and CCDT queue manager groups will enable initial re-connection on failure Uniform Cluster rebalancing will enable automatic rebalancing on recovery App App App App IBM MQ 9.1.2 CD https://developer.ibm.com/messaging/2019/03/21/building-scalable-fault-tolerant-ibm-mq-systems/
  • 37. © 2019 IBM Corporation Automatic Application balancing 37 App App App App App App IBM MQ 9.1.2 CD Even to horizontally scale out a queue manager deployment Simply add a new queue manager to the uniform cluster The new queue manager will detect an imbalance of applications and request its fair share https://developer.ibm.com/messaging/2019/03/21/building-scalable-fault-tolerant-ibm-mq-systems/
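The scale-out behaviour described on the slides above — a newly added queue manager detecting an imbalance and requesting its fair share of application instances — reduces to a fair-share calculation. A minimal sketch, with illustrative names only (the real logic runs inside the queue managers):

```python
# Sketch of the uniform cluster fair-share idea: spread connected
# application instances as evenly as possible across all members.
def rebalance(connections):
    """connections: dict of qmgr name -> number of connected app instances.
    Returns a new dict with instances spread as evenly as possible."""
    total = sum(connections.values())
    members = sorted(connections)            # deterministic member order
    base, extra = divmod(total, len(members))
    # The first `extra` members carry one instance above the base share
    return {qm: base + (1 if i < extra else 0)
            for i, qm in enumerate(members)}
```

Adding an empty queue manager to a balanced pair of three-instance members, for example, ends with each member holding two instances.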
  • 38. © 2019 IBM Corporation Uniform Cluster features 38 IBM MQ 9.1.2 CD As well as the automatic rebalancing of the C library based clients, MQ 9.1.2 CD introduced a number of new or improved features for the distributed platforms that tie together to make all this possible This means you need both the queue managers and the clients to be the latest MQ version Creation of a Uniform Cluster • A simple qm.ini tuning parameter for now The ability to identify applications by name, to define grouping of related applications for balancing • Extends the existing JMS capability to all languages Auto reconnectable applications • Only applications that connect with the auto-reconnect option are eligible for rebalancing Text based CCDTs to make it easier to configure this behaviour • And to allow duplicate channel names TuningParameters: UniformClusterName=CLUSTER1 $ export MQAPPLNAME=MY.SAMPLE.APP https://developer.ibm.com/messaging/2019/03/21/walkthrough-auto-application-rebalancing-using-the-uniform-cluster-pattern/ ... Channels: DefRecon=YES
  • 39. © 2019 IBM Corporation39 Balancing by application name Automatic application balancing is based on the application name alone Different groups of application instances with different application names are balanced independently By default the application name is the executable name This has been customisable with Java and JMS applications for a while MQ 9.1.2 CD clients have extended this to other programming languages For example C, .NET, XMS, … Application name can be set either programmatically or as an environment override App App App App App App IBM MQ 9.1.2 CD https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.1.0/com.ibm.mq.dev.doc/q132920_.htm App App App App App App
  • 40. © 2019 IBM Corporation Building scalable and available solutions JSON CCDT Build your own JSON format CCDTs Supports multiple channels of the same name on different queue managers to simplify the building of uniform clusters Available with all 9.1.2 clients C, JMS, .NET, Node.js, Golang clients © 2019 IBM Corporation 40 IBM MQ 9.1.2 CD { "channel": [ { "name": "ABC", "queueManager": "A" }, { "name": "ABC", "queueManager": "B" } ] }
  • 41. © 2019 IBM Corporation Configuring the CCDT for application balancing in a Uniform Cluster To correctly set up a CCDT for application rebalancing it needs to contain two entries per queue manager: • An entry under the name of a queue manager group • An entry under the queue manager's real name (These previously would need to be different channels, but with the JSON CCDT this is unnecessary) The application connects using the queue manager group as the queue manager name (prefixed with an '*') © 2019 IBM Corporation 41 IBM MQ 9.1.2 CD { "channel": [ { "name": "SVRCONN.CHANNEL", "type": "clientConnection", "clientConnection": { "connection": [ { "host": "host1", "port": 1414 } ], "queueManager": "ANY_QM" } }, { "name": "SVRCONN.CHANNEL", "type": "clientConnection", "clientConnection": { "connection": [ { "host": "host2", "port": 1414 } ], "queueManager": "ANY_QM" } }, … … { "name": "SVRCONN.CHANNEL", "type": "clientConnection", "clientConnection": { "connection": [ { "host": "host1", "port": 1414 } ], "queueManager": "QMGR1" } }, { "name": "SVRCONN.CHANNEL", "type": "clientConnection", "clientConnection": { "connection": [ { "host": "host2", "port": 1414 } ], "queueManager": "QMGR2" } } ] } QMGR1 QMGR2
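A dual-entry CCDT like the one on this slide is easier to generate than to hand-edit. A sketch that builds it, reusing the slide's ANY_QM group name and SVRCONN.CHANNEL channel (the host names and ports are illustrative):

```python
# Generate a JSON CCDT with two entries per queue manager:
# one under the group name, one under the real queue manager name.
import json

def build_ccdt(qmgrs, group="ANY_QM", channel="SVRCONN.CHANNEL"):
    """qmgrs: dict of real queue manager name -> (host, port)."""
    def entry(qm_name, host, port):
        return {
            "name": channel,
            "type": "clientConnection",
            "clientConnection": {
                "connection": [{"host": host, "port": port}],
                "queueManager": qm_name,
            },
        }
    channels = []
    for real_name, (host, port) in sorted(qmgrs.items()):
        channels.append(entry(group, host, port))      # group entry
        channels.append(entry(real_name, host, port))  # real-name entry
    return json.dumps({"channel": channels}, indent=2)
```

An application then connects with queue manager name `*ANY_QM` to be eligible for rebalancing across the group.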
  • 42. © 2019 IBM Corporation Can I decouple any application?
  • 43. © 2019 IBM Corporation Does this work for all applications? This pattern of loosely coupled applications only works for certain application styles. 43 - no Good Applications that can tolerate being moved from one queue manager to another without realising and can run with multiple instances • Datagram producers and consumers • Responders to requests, e.g. MDBs • No message ordering Bad Applications that create persistent state across multiple messaging operations, or require a single instance to be running • Requestors waiting for specific replies • Dependent on message ordering • Global transactions …
  • 44. © 2019 IBM Corporation A new hope for transactions 44 © 2019 IBM Corporation Global transactions require a single resource manager to be named when connecting. For MQ a resource manager is a queue manager. This prevents the use of queue manager groups in CCDTs However, WebSphere Liberty 18.0.0.2 and MQ 9.1.2 CD support the use of CCDT queue manager groups when connecting IBM MQ 9.1.2 CD App ConnectionFactory GROUP { "channel": [ { "name": "SVRCONN.QM1", "queueManager": "GROUP" }, { "name": "SVRCONN.QM2", "queueManager": "GROUP" } ] }
  • 45. © 2019 IBM Corporation Availability routing in an MQ Cluster
  • 46. © 2019 IBM Corporation Clustering for availability Is MQ Clustering a high availability solution? 46 NO YES – Having multiple potential targets for any message can improve the availability of the solution, always providing an option to process new messages. – A queue manager in a cluster has the ability to route new and old messages based on the availability of the channels, routing messages to running queue managers. – Clustering can be used to route messages to active consuming applications. Not for the message data. Each message is only available from a single queue manager Clustering can form a part of the overall high availability of the messaging system
  • 47. © 2019 IBM Corporation Channel availability routing 47 • When performing workload balancing, the availability of the channel to reach the target is a factor • All things being equal, messages will be routed to those targets with a working channel Things that can prevent routing • Applications targeting messages at a specific queue manager (e.g. reply message) • Using “cluster workload rank” • Binding messages to a target Routing of messages based on availability doesn’t just happen when they’re first put, it also occurs for queued transmission messages every time the channel is retried So blocked messages can be re-routed, if they’re not prevented… Service Service App 1 App 1 Client Service Queue Manager Queue Manager Queue Manager Queue Manager
  • 48. © 2019 IBM Corporation Pros and cons of binding 48 Bind context: Duration of an open Duration of logical group • All messages put within the bind context will go to same target* • Message order can be preserved** • Workload balancing logic is only driven at the start of the context • Once a target has been chosen it cannot change • Whether it’s available or not • Even if all the messages could be redirected Bind on open Bind on group Bind context: None • Greater availability, a message will be redirected to an available target*** • Overhead of workload balancing logic for every message • Message order may be affected Bind not fixed Bind on open is the default. It could be set on the cluster queue (don’t forget aliases) or in the app * While a route is known by the source queue manager, it won’t be rebalanced, but it could be DLQd ** Other aspects may affect ordering (e.g. dead-letter queueing) *** Unless it’s fixed for another reason (e.g. specifying a target queue manager)
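The availability trade-off on this slide can be shown with a toy simulation (assumed names, not MQ APIs): with bind-on-open the target chosen at open time never changes, even once it is unavailable, while bind-not-fixed re-drives the workload balancing choice for every message.

```python
# Toy illustration of bind-on-open vs bind-not-fixed routing.
def route(messages, targets, bind_on_open):
    """targets: dict of qmgr -> available? Returns the qmgr chosen per
    message, or 'DLQ' when the bound target is down and cannot be rerouted."""
    fixed = None
    routed = []
    for _ in range(messages):
        if bind_on_open:
            if fixed is None:
                # Workload balancing runs once, at open time
                fixed = next(iter(targets))
            routed.append(fixed if targets[fixed] else "DLQ")
        else:
            # Workload balancing runs for every message
            avail = [qm for qm, up in targets.items() if up]
            routed.append(avail[0] if avail else "DLQ")
    return routed
```

If the first-chosen target goes down, bind-on-open strands every message while bind-not-fixed redirects them all to the surviving target — at the cost of per-message balancing overhead and possible reordering.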
  • 49. © 2019 IBM Corporation Application availability routing
  • 50. © 2019 IBM Corporation App 1 App 1 Client 1 QMgr QMgr QMgr • Cluster workload balancing does not take into account the availability of receiving applications • Or a build up of messages on a queue Service 1 Service 1 Blissful ignorance This queue manager is unaware of the failure of one of the service instances Unserviced messages Half the messages will quickly start to build up on the service queue Application based routing 50
  • 51. © 2019 IBM Corporation Service 1 Service 1 QMgr QMgr QMgr App 1App 1Client 1 Application based routing 51
  • 52. © 2019 IBM Corporation App 1 App 1 Client 1 • MQ provides a sample monitoring service tool, amqsclm • It regularly checks for attached consuming applications (IPPROCS) • And automatically adjusts the cluster queue definitions to route messages intelligently (CLWLPRTY) • That information is automatically distributed around the cluster Service 1 Service 1 QMgr QMgr QMgr Moving messages Any messages that slipped through will be transferred to an active instance of the queue Detecting a change When a change to the open handles is detected the cluster workload balancing state is modified Sending queue managers Newly sent messages will be sent to active instances of the queue Application based routing 52 FR
  • 53. © 2019 IBM Corporation Cluster Queue Monitoring Sample – amqsclm – is provided with MQ to ensure messages are directed towards the instances of clustered queues that have consuming applications currently attached. This allows all messages to be processed effectively even when a system is asymmetrical (i.e. consumers not attached everywhere). • In addition it will move already queued messages from instances of the queue where no consumers are attached to instances of the queue with consumers. This removes the chance of long term marooned messages when consuming applications disconnect. – The above allows for more versatility in the use of clustered queue topologies where applications are not under the direct control of the queue managers. It also gives a greater degree of high availability in the processing of messages. – The tool provides a monitoring executable to run against each queue manager in the cluster hosting queues, monitoring the queues and reacting accordingly. • The tool is provided as source (amqsclm.c sample) to allow the user to understand the mechanics of the tool and customise where needed. 53
  • 54. © 2019 IBM Corporation AMQSCLM Logic – Based on the existing MQ cluster workload balancing mechanics: • Uses cluster priority of individual queues – all else being equal, preferring to send messages to instances of queues with the highest cluster priority (CLWLPRTY). • Using CLWLPRTY always allows messages to be put to a queue instance, even when no consumers are attached to any instance. • Changes to a queue’s cluster configuration are automatically propagated to all queue managers in the cluster that are workload balancing messages to that queue. – Single executable, set to run against each queue manager with one or more cluster queues to be monitored. – The monitoring process polls the state of the queues on a defined interval: • If no consumers are attached: – CLWLPRTY of the queue is set to zero (if not already set). – The cluster is queried for any active (positive cluster priority) queues. – If they exist, any queued messages on this queue are got/put to the same queue. Cluster workload balancing will re-route the messages to the active instance(s) of the queue in the cluster. • If consumers are attached: – CLWLPRTY of the queue is set to one (if not already set). – Defining the tool as a queue manager service will ensure it is started with each queue manager 54
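The per-poll decision described above can be condensed into a few lines. This is an illustrative sketch of the behaviour only — the real tool is the amqsclm.c sample shipped with MQ — with function and parameter names assumed for the example:

```python
# Sketch of the amqsclm poll decision: derive the queue's cluster
# priority from its open-for-input count (IPPROCS), and decide whether
# queued messages should be re-put so the cluster re-routes them.
def poll_queue(ipprocs, depth, active_elsewhere):
    """Returns (new_clwlprty, transfer_messages)."""
    if ipprocs == 0:
        # No consumers here: drop out of the workload balancing choice...
        new_prty = 0
        # ...and re-put queued messages so the cluster re-routes them,
        # but only if an active (positive priority) instance exists elsewhere
        transfer = depth > 0 and active_elsewhere
    else:
        # Consumers attached: make this instance a preferred target
        new_prty = 1
        transfer = False
    return new_prty, transfer
```

Because CLWLPRTY never goes negative, messages can always be put somewhere even when no consumers are attached anywhere, exactly as the slide notes.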
  • 55. © 2019 IBM Corporation Putting it all together
  • 56. © 2019 IBM Corporation Uniform Cluster 56 App App Bringing it all together • Build a matching set of queue managers, in the style of a uniform cluster • Make them highly available to prevent stuck messages • Consider adding amqsclm to handle a lack of consumers • Setup your CCDTs for decoupling applications from individual queue managers • Look at the 9.1.2+ application rebalancing capability and see if it matches your needs • Connect your applications App App App App amqsclm amqsclm amqsclm decoupled
  • 57. © 2019 IBM Corporation What about MQ Clusters in the cloud?
  • 58. © 2019 IBM Corporation MQ Clusters and Clouds Not quite… 58 “Cloud platforms provide all an MQ cluster can do” Clouds often provide cluster-like capability: Directory services and routing Network workload balancing Great for stateless workload balancing Can be good for balancing unrestricted clients across multiple But where state is involved, such as reliably sending messages from one queue manager to another without risking message loss or duplication, such routing isn’t enough That’s still the job of an MQ cluster…
  • 59. © 2019 IBM Corporation MQ Clusters and Clouds Yes, but think it through first… 59 “So will MQ clusters work in a cloud?” Some things may be different in your cloud: o A new expectation that queue managers will be created and deleted more dynamically than before o Queue managers will “move around” with the cloud o Your level of control may be relaxed
  • 60. © 2019 IBM Corporation Adding and removing queue managers 60 Adding queue managers o Have a simple clustering topology o Separate out your full repositories and manage those separately o Automate the joining of a new queue manager Removing queue managers – this is harder! o You might have messages on it that you need, how are you going to remove those first? o If you just switch off the queue manager, it’ll still be known by the cluster for months! o Is that a problem? o If routing is based on availability, messages will be routed to alternative queue managers o Messages will sit on the cluster transmission queues while the queue manager is still known o It makes your view of the cluster messy o Automate the cluster removal o From the deleting queue manager o Stop and delete the cluster channels – give it a little time to work through to the FRs o Then from a full repository – as it’ll never be coming back o RESET the queue manager out of the cluster
  • 61. © 2019 IBM Corporation Adding and removing queue managers 61
    Adding queue managers
    o Have a simple clustering topology
    o Separate out your full repositories and manage those separately (there's no need to be adding and removing them dynamically)
    o Automate the joining of a new queue manager
    Removing queue managers – that's harder!
    o You might have messages on it that you need – how are you going to remove those?
    o If you just switch off the queue manager, it'll still be known by the cluster for months! Is that a problem?
      • If routing is based on availability, messages will be routed to alternative queue managers – unless there's no other choice (e.g. replies being targeted to this queue manager, or no other queue of the same name in the cluster)
      • Messages will sit on the cluster transmission queues while the queue manager is still known – if it had been removed those messages would have failed, or been moved to a DLQ
      • Having defunct queue managers still known in the cluster might make your monitoring and problem determination harder
    o Automate the cluster removal
      • From the deleting queue manager – stop and delete the cluster channels, and give it a little time to work through to the FRs
      • Or from a full repository, as it'll never be coming back – RESET the queue manager out of the cluster
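The removal steps above can be sketched in MQSC. The cluster, queue manager and channel names (DEMOCLUS, QM1, TO.QM1) are assumptions for illustration:

```mqsc
* On QM1, the queue manager leaving the cluster:
SUSPEND QMGR CLUSTER(DEMOCLUS)
* Stop the cluster channels and take them out of the cluster,
* then allow time for the change to reach the full repositories
STOP CHANNEL(TO.QM1)
ALTER CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) CLUSTER(' ')

* Later, on a full repository, once QM1 is never coming back:
RESET CLUSTER(DEMOCLUS) QMNAME(QM1) ACTION(FORCEREMOVE) QUEUES(YES)
```

QUEUES(YES) also removes the cluster queues that QM1 had advertised, rather than leaving them known for their normal expiry period.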
  • 62. © 2019 IBM Corporation Queue managers will move around 62
    o Usually this is not actually a problem – clouds do a good job of hiding it from you, e.g. Kubernetes "services" effectively provide DNS for a container
    o Check the cloud capabilities available to you
    o It's probably best to use hostnames in the connection details rather than IP addresses
    o Remember that names/addresses may only be visible from within the cloud; external addresses may differ, and comma-separated CONNAMEs can help here
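For example, a cluster-receiver channel can list an internal and an external address in its CONNAME, tried in order; the hostnames here are assumptions:

```mqsc
* Hostnames rather than IP addresses; the second address is
* tried if the first cannot be resolved or reached
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) +
       CONNAME('qm1.internal.cloud(1414),qm1.example.com(1414)') +
       CLUSTER(DEMOCLUS)
```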
  • 63. © 2019 IBM Corporation Control over the cluster 63
    Keep it simple
    In a potentially self-service world, where new applications need their own queue managers, overall control of configuring everything in the cluster is not desirable. To prevent chaos, don't just pick up your bespoke, on-prem cluster topology and put it in the cloud – re-think it:
    o Simplify the cluster topology – do you really need all those overlapping clusters?
    o Make sure the cluster infrastructure (the full repositories, gateway queue managers) is still under your control
    o Script joining/leaving a cluster and automate this with the deployment of new queue managers
    o A cluster is a shared namespace, so enforce naming conventions on cluster queues and topics
    o Use your full repositories as a view across the cluster
  • 64. © 2019 IBM Corporation Clustering good practices
  • 65. © 2019 IBM Corporation Full repositories 65
    o Dedicate a pair of queue managers to being the full repositories for the cluster – don't even think of them as queue managers
    o A pair for redundancy, obviously – but why not more? More than two won't work how you hope it will…
      • A queue manager will randomly pick two full repositories to work with (manually defining your sender channels doesn't guarantee which two)
      • If those two are unavailable it won't go off and find another one
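Nominating the pair is a one-line change on each of the two chosen queue managers, plus the usual cluster channels; all names below are assumptions:

```mqsc
* On each of the two dedicated full repository queue managers
ALTER QMGR REPOS(DEMOCLUS)

* On FR1: its own cluster receiver, plus a sender to the other FR
DEFINE CHANNEL(TO.FR1) CHLTYPE(CLUSRCVR) +
       CONNAME('fr1.example.com(1414)') CLUSTER(DEMOCLUS)
DEFINE CHANNEL(TO.FR2) CHLTYPE(CLUSSDR) +
       CONNAME('fr2.example.com(1414)') CLUSTER(DEMOCLUS)
```

The mirror-image pair of definitions goes on FR2; partial repositories then need only a cluster receiver and one cluster sender pointing at either FR.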
  • 66. © 2019 IBM Corporation Check your defaults 66
    Cluster transmission queues – DEFCLXQ
    o MQ started with each queue manager having just the one cluster transmission queue
    o It added the ability to automatically split that out into one per cluster channel
    o This improves visibility of channel problems, and reduces the cross-channel impacts of a blocked channel
    Cluster workload rank and priority – CLWLRANK/CLWLPRTY
    o These are used as a tie breaker across multiple queues/channels to determine where messages are sent
    o They default to zero (the lowest), so you can raise one to make it more favourable, but you can't lower one to make it less favourable
    o Consider setting them all to be around the middle (e.g. 5) to give yourself options
    Default application bind setting – DEFBIND
    o The default is "on open" – great if you need affinity across messages, poor if you value message availability over that
    o Consider setting this to "not fixed"
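The three suggestions above might look like this in MQSC; the queue and cluster names are assumptions:

```mqsc
* One cluster transmission queue per cluster-sender channel
ALTER QMGR DEFCLXQ(CHANNEL)

* Mid-range rank/priority leaves room to favour or disfavour
* this queue later; NOTFIXED keeps messages available rather
* than binding them to one instance on MQOPEN
DEFINE QLOCAL(APP.QUEUE) CLUSTER(DEMOCLUS) +
       CLWLRANK(5) CLWLPRTY(5) DEFBIND(NOTFIXED)
```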
  • 67. © 2019 IBM Corporation 67 Problem diagnosis
  • 68. © 2019 IBM Corporation Problem diagnosis 68
    Cluster queues are not visible where you think they should be, or messages sit on a transmission queue for no reason…
    It's often:
    o A break in communication between queue managers – check the PR->FR and FR->PR channels
    o A config issue – check the host of the resource and that the queue manager has correctly joined the cluster
    What to check:
    o Configuration – double check the cluster names
    o Queue manager cluster knowledge – are the queue managers known to the FRs?
    o Queue manager channels – check the PR->FR and FR->PR channels
    o Queue manager error logs – look for messages about expiring objects
    o System queues – …
    Sequence of diagnosis steps:
    o Check the queue manager where the cluster definitions live
    o Check all the full repositories
    o Check the queue manager where you're sending the messages from
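A few MQSC display commands cover most of those checks on each queue manager in the diagnosis sequence; DEMOCLUS is an assumed cluster name:

```mqsc
* What does this queue manager know about the cluster members?
DISPLAY CLUSQMGR(*) CLUSTER(DEMOCLUS)

* Are the cluster channels actually running?
DISPLAY CHSTATUS(*)

* Is the queue advertised in the cluster as expected?
DISPLAY QCLUSTER(*) CLUSTER(DEMOCLUS)
```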
  • 69. © 2019 IBM Corporation Know your system cluster queues 69
    SYSTEM.CLUSTER.COMMAND.QUEUE
    o Source of all work for the cluster repository process
    o Messages arrive here from local configuration commands and from FR->PR and PR->FR
    o If you delete messages from here you may need a cluster refresh
    o Steady state: empty
    SYSTEM.CLUSTER.REPOSITORY.QUEUE
    o Persistent store for accumulated cluster knowledge
    o All cluster object changes are persisted here, based on local configuration and remote knowledge
    o If you delete messages from here your only option is a cluster refresh
    o Steady state: non-empty
    SYSTEM.CLUSTER.TRANSMIT.*
    o Used to transfer all messages within a cluster – shared between user messages and cluster control messages
    o If you delete messages from here you may need a cluster refresh, and you may have lost your own messages!
    o Steady state: ideally empty (may contain messages whilst channels are down)
    SYSTEM.CLUSTER.HISTORY.QUEUE
    o Intended for capturing diagnostics for IBM MQ Service teams
    o Only used if REFRESH CLUSTER is issued – the entire contents of the local cluster cache are written here prior to processing the REFRESH
    o Messages expire after 180 days
    o Steady state: ignore it
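Checking those steady states is a single wildcard display; and if the local cluster cache really is beyond repair, REFRESH CLUSTER rebuilds it (use sparingly). DEMOCLUS is an assumed cluster name:

```mqsc
* Depths of all the system cluster queues in one go
DISPLAY QLOCAL(SYSTEM.CLUSTER.*) CURDEPTH

* Last resort only: rebuild this queue manager's cluster cache
REFRESH CLUSTER(DEMOCLUS) REPOS(NO)
```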
  • 70. Thank You David Ware Chief Architect, IBM MQ dware@uk.ibm.com www.linkedin.com/in/dware1