Hierarchical Decomposition Heuristic for Scheduling:
Coordinated Reasoning for Decentralized and
Distributed Decision-Making Problems
Jeffrey D. Kelly* and Danielle Zyngier
* Industrial Algorithms LLC., 15 St. Andrews Road, Toronto, ON, M1P 4C3, Canada
jdkelly@industrialgorithms.ca
Abstract
This paper presents a new technique for decomposing and rationalizing large decision-making
problems into a common and consistent framework. We call this the hierarchical decomposition
heuristic (HDH) which focuses on obtaining "globally feasible" solutions to the overall problem, i.e.,
solutions which are feasible for all decision-making elements in a system. The HDH is primarily
intended to be applied as a standalone tool for managing a decentralized and distributed system when
only globally consistent solutions are necessary or as a lower bound to a maximization problem within a
global optimization strategy such as Lagrangean decomposition. An industrial scale scheduling example
is presented that demonstrates the abilities of the HDH as an iterative and integrated methodology in
addition to three small motivating examples. Also illustrated is the HDH's ability to support several
types of coordinated and collaborative interactions.
Keywords: Decision-making, decomposition, scheduling, coordination, collaboration, hierarchical.
1. Introduction
Decomposition is a natural way of dealing with large problems. In the process industries
decomposition is found both in company organizational charts (executive-level to director-level to
management-level) and in process related decision-making elements such as production planning and
scheduling and process optimization and control systems. Other relevant instances of organizational
hierarchies can be found in Table 1.
Decision-making systems which contain at least two hierarchical levels can be decomposed into two
layers: the coordination layer and the cooperation (or collaboration) layer (Figure 1). The term
cooperation is used when the elements in the bottom layer do not have any knowledge of each other i.e.,
they are fully separable from the perspective of information, while collaboration is used when the
elements in the bottom layer exchange information.
Surprisingly, it is often the case in practice that the hierarchical systems depicted in Figure 1 do not
contain any feedback from the bottom layer to the top layer i.e., the up arrow in Figure 1. In a
management structure, when an executive sends targets[1]
to his/her directors (i.e., in a feedforward
manner), the directors are not necessarily expected to return a feasibility impact of the target (i.e., no
feedback). Instead, the target is usually assumed to be fixed and not subject to change. Similarly, when a
master schedule has been calculated for the entire process plant, the individual schedulers using their
collaborating spreadsheet simulator tools have very limited means of feeding or relaying information
back to the master scheduler that the master scheduling targets for the month period cannot be achieved.
The inconsistencies in the individual schedules are later apparent as infeasibilities and inefficiencies
such as product reprocessing or the inability to meet customer order due-dates on time. Therefore,
methods are needed that can coordinate and organize the different elements of the decomposed system and to manage feedback between these elements. This paper presents a novel method for integrating and interfacing these different decision-making layers with the goal of achieving global feasibility.
[1] In this context a target is similar to an executive-order or a directive and is later introduced as an objective.
In the context of this paper, models may be referred to as global when they consider the full scope of
the decision-making system in terms of both the temporal and spatial dimensions of the system such as a
fully integrated oil-refinery planning its production or manufacturing months into the future. On the
other hand, local models only consider a sub-section of the decision-making system such as the gasoline
blending area or the first few time-periods of a multiple time-period planning model. In terms of the
level of detail, we classify the global models of the coordinator into decreasing order of detail as:
• Rigorous: models that contain all of the known production constraints, variables and bounds that exist in the local rigorous models of a system;
• Restricted: models that are partially rigorous, i.e., they include all of the detailed constraints, variables and bounds of sections of the entire system;
• Relaxed: models that are not necessarily rigorous but contain the entire feasible region of all cooperators with certain constraints or bounds that have either been relaxed or removed;
• Reduced: models that are not rigorous and only contain a section of the feasible region of the cooperators but usually contain, in spirit, the entire scope of the global problem.
A more detailed discussion on the effects of the different model types in the framework of the
hierarchical decomposition heuristic (HDH) is presented in Section 3. From a scheduling perspective,
master scheduling models are often relaxed, reduced or restricted global models in that they do not
contain all of the processing, operating and maintaining details of the production[2]
system. In the same
context, individual scheduling models are usually local rigorous models since they contain all of the
necessary production constraints in order to accurately represent the sub-system. Scheduling models
cannot usually be global rigorous models due to the unreasonable computation times that this would entail.
The concepts of global versus local and reduced/relaxed/restricted versus rigorous models are of course relative. That is, a local rigorous model from a scheduling perspective may be considered as global relaxed from a process control perspective.
[2] Production corresponds to the integration of process, operations and maintenance.
Another important concept is that of "globally feasible" solutions to a distributed decision-making
problem. In the context of this paper this term refers to a decision from the coordination layer which is
feasible for all cooperators in the cooperation layer. Globally feasible solutions indicate that a consensus
has been reached between the coordination and cooperation layers as well as amongst cooperators.
An important aspect of making good decisions at the coordination level is the correct estimation or
approximation of the system's capability[3]
i.e., roughly how much the system can produce and how fast.
If the capability is under-estimated (i.e., sand-bagged) there is opportunity loss since the system could in
fact be more productive than expected. On the other hand, if capability is over-estimated (i.e., cherry-
picked), the expectations will be too high and infeasibilities will likely be encountered at the cooperation
layer of the system. Concomitantly, in every decomposed system there also needs to be knowledge of
what constraints should be included in each of the coordination and cooperation layers and not just what
the upper and lower ranges of the constraints should be. Therefore, we introduce the notion of public,
private, protected and plot/ploy constraints which are defined as follows:
• Public: constraints that are fully known to every element in the system (coordinator and all cooperators). If all constraints in a system are public this indicates that the coordinator may be a global rigorous model while the cooperators are all local rigorous models. In this scenario only one iteration between coordinator and cooperators is needed in the system in order to achieve a globally feasible solution since the coordinator's decisions will never violate any of the cooperators' constraints;
• Private: constraints that are only known to the individual elements of the system (coordinator and/or cooperators). Private constraints are very common in decomposed systems where the coordinator does not know about all of the detailed or pedantic requirements of each cooperator and, vice-versa, the coordinator can have private constraints not known to the cooperators;
• Protected: constraints that are known to the coordinator and only to a few of the cooperators. This situation occurs when the coordinator has different levels of knowledge about each cooperator in the system;
• Plot/Ploy: situations in which one or more cooperators join forces (i.e., collusion) to fool or misrepresent their results back to the coordinator for self-interest reasons.
[3] Capability is defined as the combination of connectivity, capacity and compatibility information.
The decomposition strategies considered in this paper address systems with any combination of
public, private and protected constraints. In other words, the cooperators are considered to be authentic,
able and available elements of the system in that (1) they will not deceive the coordinator for their own
benefit, (2) they are capable of finding a feasible solution to the coordinator's requests if one exists and
(3) that they will address the coordinator's request as soon as it is made.
Decomposition however comes at a price. Even though each local rigorous model in a decomposed or
divisionalized system is smaller and thus simpler than the global rigorous one, iterations between the
coordination and cooperation layers will likely be required in order to achieve a globally feasible
solution which may increase overall solution times. In addition, unless a global optimization technique
such as Lagrangean decomposition or spatial branch-and-bound search is applied there are no guarantees
that the globally optimal balance or equilibrium of the system will be found. Nevertheless, the following
restrictions on the decision-making system constitute very compelling reasons for the application of
decomposition strategies:
• Secrecy/security: In any given decision-making system there may be private constraints (i.e., information that cannot be shared across the cooperators). This may be the case when different companies are involved such as in a supply chain with outsourcing. Additionally the cooperators may make their decisions using different operating systems or software so that integration of their models is not straightforward. It is also virtually impossible to centralize a system in which the cooperators use undocumented personal knowledge to make their decisions (as is often the case when process simulators or spreadsheets are used to generate schedules) unless significant investment is made in creating a detailed mathematical model.
• Support: The effort required to support and maintain a centralized or monolithic decision-making system may be too large and/or unwieldy.
• Storage: Some decision-making models contain so many variables and constraints that it is impossible to store these models in computer memory.
• Speed: There are very large and complex decision-making models that cannot be solved in reasonable time even though they can be stored in memory. In this situation decomposition may be an option to reduce the computational time for obtaining good feasible solutions.
The performance of any decomposition method is highly dependent on the nature of the decision-
making system in question and on how the decomposition is configured. Defining the decomposition
strategy can be a challenge in itself and of course is highly subjective. For instance, one of the first
decisions to be made when decomposing a system is the dimension of the decomposition: should the
system be decomposed in the time domain (Kelly, 2002), in the spatial equipment domain (Kelly and
Mann, 2004), in the spatial material domain (Kelly, 2004b), or in some combination of the three
dimensions? If decomposing in the time domain, should there be two sub-problems with half of the
schedule's time horizon in each one, or should there be five sub-problems with one fifth of the
schedule's time horizon in each one? Should there be any time overlap between the sub-problems? The
answers to these questions are problem-specific and therefore the application of decomposition
strategies requires a deep understanding of the underlying decision-making system.
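As a concrete illustration of the temporal questions above, the short sketch below (plain Python, not from the paper; the horizon length, number of sub-problems and overlap are arbitrary assumptions) shows one simple way of cutting a scheduling horizon into overlapping sub-problem windows.

```python
# Minimal sketch (illustrative only): generate temporal sub-problem windows
# with an optional overlap between consecutive windows.

def split_horizon(num_periods: int, num_subproblems: int, overlap: int = 0):
    """Return (start, end) period indices, end exclusive, for each sub-problem."""
    base = num_periods // num_subproblems
    windows = []
    for s in range(num_subproblems):
        start = s * base
        end = num_periods if s == num_subproblems - 1 else (s + 1) * base + overlap
        windows.append((start, min(end, num_periods)))
    return windows

# e.g. a 30-period schedule split into 5 sub-problems with a 2-period overlap
print(split_horizon(30, 5, overlap=2))
# [(0, 8), (6, 14), (12, 20), (18, 26), (24, 30)]
```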
1.1. Centralized, Coordinated, Collaborative and Competitive Reasoning
There is no question that the most effective decision-making tool for any given system is a fully
centralized strategy that uses a global rigorous model provided it satisfies the secrecy, support, storage
and speed restrictions mentioned previously. If that is not at all possible then decomposition is required.
If decomposition is needed the best decomposition strategy is a coordinated one (Figure 1) where the
cooperators can work in parallel. If a coordinator does not exist, the next best approach is a
collaborative strategy in which the cooperators work together obeying a certain priority or sequence in
order to achieve conformity, consistency or consensus. The worst-case decision-making scenario is a
competitive strategy in which the cooperators compete or fight against each other in order to obtain
better individual performance as opposed to good performance of the overall system (self versus mutual-
interest). This type of scenario is somewhat approximated by a collaborative framework in which the
cooperators work in parallel or simultaneously as opposed to in series or in priority as suggested above.
In this paper coordinated and collaborative decision-making strategies are discussed and are
demonstrated in the illustrative example.
Figure 2 provides a hypothetical value statement for the four strategies. If we use a "defects versus
productivity" trade-off curve, then for the same productivity (see vertical dotted line) there are varying
levels of defects, not only along the line, but also across several lines representing the centralized,
coordinated, collaborative and competitive strategies. These lines or curves represent an operating,
production or manufacturing relationship for a particular unit, plant or enterprise. Each line can also
represent a different reasoning isotherm in the sense of how defects versus productivity changes with
varying degrees of reasoning where the centralized reasoning isotherm has the lowest amount of defects
for the same productivity as expected.
Collaborative reasoning implies that there is no coordination of the cooperators in a system. As each
cooperator finds a solution to its own decision-making problem it strives to reach a consensus with the
adjacent cooperators based on its current solution. The solution of the global decision-making problem
then depends on all cooperators across the system reaching an agreement or equilibrium in a prioritized
fashion where each cooperator only has limited knowledge of the cooperator(s) directly adjacent to
itself. It is thus clear that collaborative reasoning reaches at best a myopic conformance between
connected cooperators. As previously stated, in cases where no priorities are established a priori for the
cooperators the collaborative strategy can easily become a competitive one since the cooperator which is
the fastest at generating a feasible schedule for itself gains immediate priority over the other cooperators.
Therefore the cooperators will compete in speed in order to simplify their decisions given that the
cooperator that has top priority is the one that is the least constrained by the remainder of the decision-
making system.
Coordinated reasoning on the other hand contains a coordination layer with a model of the global
decision-making system albeit often a simplified one by the use of a relaxed, reduced or restricted
model. As a result the conformance between the cooperators is reached based on a global view of the
system. This may entail a reduced number of iterations between the hierarchical layers for some
problems when compared to collaborative strategies, notably when the flow paths of the interconnected
resources of the cooperators are in a convergent, divergent and/or cyclic configuration as opposed to a
simple linear chain (i.e., a flow-shop or multi-product network).
Centralized systems can be viewed as a subset of coordinated systems since any coordinated strategy
will be equivalent to a centralized one when the coordination level in Figure 1 contains a global rigorous
model. Additionally, coordinated strategies are also a superset of purely collaborative systems since the
latter consist of a set of interconnected cooperators with no coordination. Collaboration can be enforced
in a coordinated structure by assigning the same model as one of the cooperators to the coordinator.
Centralized systems do not suffer from arbitrary decomposition boundaries or interfaces. This implies that in a monolithic or centralized decision-making problem it is the
optimization solver or search engine that manages all aspects of the cooperator sub-system interactions.
Examples of these interactions are the time-delays between the supply and demand of a resource
between two cooperators and the linear and potentially non-linear relationships between two or more
different resources involving multiple cooperators. In the centrally managed problem these details are
known explicitly. However in the coordinated or collaborative managed problems these are only
implicitly known and must be handled through private/protected information only.
1.2. General Structure of Decomposed Problems
As previously shown in Figure 1, decomposed or distributed problems consist of a coordination layer
and a divisionalized cooperation layer. By analyzing previous work on decomposition of large-scale
optimization problems, it is possible to identify two elements within the coordination layer: price and
pole coordination. Figure 3 shows the general structure of a decomposed system. In the coordination
layer there may be price and/or pole coordination where these two elements within the coordination
layer may also exchange information amongst themselves. The coordinator or super-project sends down
poles and/or prices to all cooperating sub-projects. Once that information is evaluated by the
cooperators, feedback information is sent back to the coordinator. It should be noted that the cooperators
do not communicate with each other, i.e., there is no collusion/consorting between them. This closed-
loop procedure continues until equilibrium or a stable balance point is reached between the two
decision-making layers. Most of the decomposition approaches in the literature can be represented using
the structure in Figure 3 as will be demonstrated in Section 2.
A pole refers to information that is exchanged in a decomposed problem regarding the quantity,
quality and/or logic of an element or detail of the system. The use of the word pole is taken from Jose
(1999) and has been extended to also include the logic elements of a system. The interchange of pole[4]
information between the decision-making layers may be denoted with what we call a protocol or parley
(Figure 4). These protocols enable communication between the layers and manage the cyclic, closed-
loop nature of decomposition information exchange. They represent and are classified into three distinct
elements as follows:
• Resource protocols relate to extensive and intensive variables that may be exchanged between sub-problems or cooperators such as flow-poles (e.g., number of barrels of oil per day in a stream), component-poles (e.g., light straight-run naphtha fraction in a stream) and property-poles (e.g., density or sulfur of a stream).
• Regulation protocols refer to extensive and intensive variables that are not usually transported or moved across cooperators such as resources but reflect more a state or condition such as holdup-poles (e.g., number of barrels of oil in a particular storage tank) and condition-poles (e.g., degrees Celsius in a FCC unit).
• Register protocols represent the extensive and intensive logic variables in a system and may involve mode-poles (e.g., mode of operation of a process unit), material-poles (e.g., material-service of a process unit) and move-poles (e.g., streams that are available for flowing material from one process unit to the next). The former register protocols are intensive logic variables whereas an example of an extensive logic variable would be the duration or amount of time a mode of operation is active for a particular unit.
[4] Another term for pole, which is perhaps more descriptive, is "peg" or pole-equilibrium-guess, given that the coordinator must essentially guess what pole values will be accepted by all of the interacting cooperators.
Each protocol consists of a pole-offer to the cooperators that originates from the coordinator. The
cooperators return pole-obstacles (resource), offsets (regulation) or outages (register) to the coordinator
that indicates if the pole-offer has been accepted or not. If the cooperators accept the (feasible,
consistent) pole-offer then the pole-obstacles, -offsets or -outages are all equal to zero. The protocols
may be similarly applied to prices since these are related to adjustments of the quantity, quality and/or
logic elements of the system. From optimization theory it is known that the prices correspond to the dual
values of the poles. The adjustment of prices can be used to establish a quantity balance between supply
and demand (also known as the equilibrium price in economic theory) based on the simple economic
principle that to increase the supply of a resource its price should be increased and to increase the
demand of a resource its price should be reduced (Cheng et al. 2006).
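A minimal sketch of how such a protocol exchange could be represented in code is given below (Python; all names and the tolerance are hypothetical and not taken from the paper). It simply records one pole-offer per time-period and the shortage/surplus feedback returned by each cooperator, and declares the offer accepted only when all feedback is zero.

```python
# Minimal sketch (names are hypothetical): a protocol carries one pole-offer
# per time-period from the coordinator and collects cooperator feedback
# (obstacles for resources, offsets for regulations, outages for registers).

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Protocol:
    name: str                 # e.g. "crude_to_blender_flow"
    kind: str                 # "resource", "regulation" or "register"
    offers: List[float] = field(default_factory=list)               # pole-offer per time-period
    feedback: Dict[str, List[float]] = field(default_factory=dict)  # cooperator -> shortage/surplus per period

    def accepted(self) -> bool:
        # The pole-offer is accepted only when every cooperator reports a
        # zero obstacle/offset/outage in every time-period.
        return all(abs(v) < 1e-6 for values in self.feedback.values() for v in values)
```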
1.3. Understanding and Managing Interactions between Cooperators
Obviously if each cooperator had no common, connected, centralized, shared, linked or global
elements then they would be completely separate and could be solved independently. Unfortunately
when there are common elements between two or more cooperators that either directly or indirectly
interact, completely separated solution approaches will only yield globally feasible solutions by chance.
A centralized decision-making strategy should have accurate and intimate knowledge of resource,
regulation and register-poles and how each is related to each other across multiple cooperators, even
how a resource-pole links to or affects in some deterministic and stochastic way a register-pole. As we
abstract a centralized system into a coordinated and cooperative system, details of synergistic (positively
correlated) and antagonistic (negatively correlated[5]) interactions are also abstracted or abbreviated. As
such, we lose information which is proxied or substituted by cooperator feedback along essentially three
axes: time, linear space and nonlinear space.
More specifically, the dead-time, delay or lag of how a change in a pole affects itself over time must
be properly understood given that it is known from process control theory that dead-time estimation is a
crucial component in robust and performant controller design. Linear spatial interactions are how one
pole’s rate of change affects another pole’s rate of change which is also known as the steady-state gain
matrix defined at very low frequencies. Well known interaction analysis techniques such as the relative
gain array (RGA) can be used to better interpret the effect and pairing of one controlled variable with another. In addition, multivariate statistical techniques such as principal component analysis (PCA) can
also be used to regress temporal and spatial dominant correlations given a set of actual plant data and to
cluster poles that seem to have some level of interdependency. Nonlinear spatial interactions define how
at different operating, production, or manufacturing-points, nonlinear effects exist which can completely
alter the linear relationship from a different operating-point. Thus, for strongly nonlinear pole
interactions, some level of nonlinear relationship should be included in the coordinator’s restricted,
relaxed or reduced model as a best practice. One such guideline for helping with this endeavor is found
in Forbes and Marlin (1994).
From an organizational and managerial perspective, understanding and managing interactions amongst
multiple diverse people across many departments and locations is a primary mandate of any
management structure. Too much antagonistic and too little synergistic interaction can lead to chaos,
low productivity and inefficiency just to name a few. Therefore a sound approach to handle this is to
first understand the interactions and then to manage them, of which restructuring, removing and realigning organizational boundaries or barriers should be at the top of the list, i.e., how cooperators are divisionalized or separated across the resource, regulation and register-protocols to achieve maximum performance.
[5] As a point of interest, note that the base of the word correlate is relate, implying a relationship.
This paper is structured as follows: first, the existing decomposition methods available in the literature
are transformed into the coordinated decomposition strategy shown in Figure 3. Second, the hierarchical
decomposition heuristic (HDH) is presented and its contributions to other decomposition strategies are
highlighted through an industrial illustrative example and three motivating examples.
2. Previous Decomposition Approaches
The need to solve increasingly larger scheduling problems has led to significant research output in the
area of decomposition. There are broadly three types of decomposition approaches classified with
respect to their drivers: pole and/or price-directed, weight-directed and error-directed. Most of the
decomposition literature contains pole- and/or price-directed approaches while process control literature
contains almost exclusively error-directed approaches. The remainder of this section provides more
details and instances of each decomposition approach.
In the same manner that optimization algorithms can be based on primal and/or dual information,
some decomposition strategies may be classified as pole and/or price-directed. The classical examples
are the Generalized Benders decomposition (Geoffrion, 1972) and Dantzig-Wolfe decomposition
(Dantzig and Wolfe, 1960) respectively. An increasingly popular price/pole method for solving large-
scale decision-making problems is Lagrangean decomposition (Wu and Ierapetritou, 2003; Karuppiah
and Grossmann, 2006). This method finds the global optimal solution of the decomposed system within
a certain tolerance and can be represented in the general decomposition structure as shown in Figure 5
(for a maximization problem).
In this method the cooperators represent temporally and/or spatially decomposed sub-problems. The
so-called linking constraints between the cooperator sub-problems are dualized in the objective function
of the coordinator with the use of Lagrange multipliers (λ) which are updated at each iteration by using
what is known as a sub-gradient optimization method. Even though this constitutes one of the most
efficient global optimization strategies for decomposed problems to date, there are a few drawbacks that
limit its implementation particularly with respect to solution times. Information regarding the upper-
bound (UB) of the global rigorous (maximization) decision-making problem is needed in order to
perform the update of the Lagrange multipliers. Computational times may increase significantly if the
cooperators are themselves individually difficult to solve. Additionally, there must be a solution at each
iteration which is globally feasible over all cooperators in order to calculate the lower bound which for
practical problems is not an insignificant task. It should therefore be clear that Lagrangean
decomposition benefits from improved strategies for obtaining this lower bound, i.e., strategies which
will yield a globally feasible solution such as the one presented in this work. In Karuppiah and
Grossmann (2006) solutions which were not globally feasible for all cooperators were eliminated from
the solution set and the search continued for other solutions. Our paper presents a different strategy for
obtaining globally feasible solutions that uses pole-obstacle, offset and outage information from all
cooperators in order to resolve conflict and to achieve consensus in subsequent iterations. Note that the
coordination layer in Lagrangean decomposition contains both a pole-coordinator which calculates the
new upper bound (of the maximization problem) and a price-coordinator which calculates the new
Lagrange multipliers (also known as marginal-costs or shadow-prices) for each cooperator (Figure 5).
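For reference, a minimal sketch of the standard sub-gradient multiplier update used in Lagrangean decomposition is shown below (plain Python; this is the generic textbook rule with a Polyak-style step size, not necessarily the exact update used in the cited works).

```python
# Minimal sketch of a generic sub-gradient Lagrange multiplier update for a
# maximization problem: UB is the current dual (upper) bound, LB a globally
# feasible lower bound, g the violation of the dualized linking constraints.
# Sign conventions depend on how the linking constraints are dualized.

def update_multipliers(lmbda, g, UB, LB, alpha=1.0):
    norm_sq = sum(gi * gi for gi in g)
    if norm_sq == 0.0:
        return lmbda                      # linking constraints already satisfied
    step = alpha * (UB - LB) / norm_sq    # Polyak-style step size
    return [li + step * gi for li, gi in zip(lmbda, g)]

# e.g. two linking constraints violated by (4.0, -1.0) with bounds 120/100
print(update_multipliers([0.0, 0.0], [4.0, -1.0], UB=120.0, LB=100.0))
```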
Decomposed systems may also be managed through price/pole-directed strategies such as auctions, which are also used in multi-agent systems[6]. According to economic theory, if the demand for a certain
resource (pole) by a consumer (i.e., a downstream cooperator) is greater than expected, the price of that
resource must be increased in order to reduce its demand. On the other hand, a larger supply (i.e., from
an upstream cooperator) than expected for a given resource entails a reduction in its price in order to
reach price equilibrium. This is the basis of the auction-based decomposition method found in Jose
(1999). The schematic for what they call a slack resource auction as a coordinated strategy can be seen
in Figure 6.
[6] Multi-agent systems require either an auctioneer or an administrator which acts as the coordinator, with an individual agent, usually autonomous, representing a cooperator.
Jose and Ungar (2000) showed that an auction can find a set of prices corresponding to the global
optimum of the overall problem if it has separable and convex sub-problems with a single time-period.
In spite of defining the directionality of the price adjustment as a function of the pole-obstacles, they did
not explicitly state how to calculate the step-size of the pole and price increase or decrease mechanism.
Furthermore, since there is the exchange of prices, pole-offers and pole-obstacles (offsets, outages), this
strategy actually corresponds to a combined price/pole error-directed strategy (Jose and Ungar, 1998).
A similar strategy for price adjustment can be found in Cheng et al. (2006) where this price
adjustment is based on price-elasticity, i.e., the sensitivity of the prices with respect to the poles (Figure
7). In this figure, pole-observations correspond to the level of the pole-offer that each cooperator can
achieve. The method presented in Cheng et al. (2006) however suffers from one of the known
drawbacks of sensitivity analysis which is the assumption of a fixed active set or basis to keep the
elasticity information valid which can be excessively restrictive for some decision-making systems such
as scheduling problems. Cheng et al. (2006) studied process control problems which are assumed to be
linear and convex whereas production scheduling problems are inherently non-linear and non-convex.
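The directionality of the price adjustment discussed in this and the preceding paragraphs can be sketched as a simple proportional rule (plain Python; the gain is an arbitrary assumption and this is not the elasticity-based update of Cheng et al. (2006)).

```python
# Minimal sketch of the stated economic principle: raise the price of a
# resource when its demand exceeds its supply and lower it otherwise.

def adjust_price(price: float, supply: float, demand: float, gain: float = 0.1) -> float:
    imbalance = demand - supply           # > 0 means the resource is scarce
    return max(0.0, price + gain * imbalance)

# e.g. demand of 110 units against a supply of 100 units nudges the price up
print(adjust_price(price=10.0, supply=100.0, demand=110.0))  # 11.0
```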
The second class of decomposition methods is weight-directed strategies. An example is the iterative
aggregation and disaggregation strategy found in Jörnsten and Leisten (1995) where the constraints and
variables of the cooperators are aggregated in the coordinator. The coordinator then optimizes with this
new aggregated model and finds the next set of pole-offers that are sent to the cooperators (Figure 8). It
is also the responsibility of the coordinator to re-calculate the variable and/or constraint weights based
on the solutions from the cooperators. It should be noted that the constraint weights can be alternatively
applied to the pole-obstacles (offsets, outages) through the use of artificial or slack variables. Another
example of a somewhat related weight-directed decomposition strategy applied to large scheduling
problems can be found in Wilkinson (1996) who performed temporal aggregation on the constraints and
variables.
The third decomposition approach is based on error-directed strategies. In the context of our paper,
error-directed strategies imply the feedback of model parameters or biases (e.g., pole-obstacles, offsets,
outages and pole sensitivity information). One of the first error-directed approaches for decomposing
batch scheduling problems was proposed by Bassett et al. (1996). In their work they presented a method
for solving large-scale batch scheduling problems by decomposing the system into a planning model
(coordinator) and several scheduling sub-models (cooperators). The dimension of the decomposition
was mainly temporal and partially spatial i.e., the scheduling model in each cooperator considered a
shorter time-horizon than the coordinator. The coordinator consisted of a globally relaxed model
whereas each cooperator corresponded to a locally rigorous model. The strategy of Bassett et al. (1996)
can be described in the general coordinated decomposition structure as shown in Figure 9.
As can be seen in Figure 9, there is no price coordination in this strategy. Equilibrium is reached
solely by the interchange of pole-related information. Integer and capacity cuts are sequentially added to
the coordinator's model whenever nonzero pole-obstacles[7]
are generated by the cooperators. However,
this assumes that the cooperators' constraints are protected i.e., the cooperators must be willing to share
or expose these more detailed constraints with the coordinator when required.
Instead of only indicating to the coordinator that infeasibilities (obstacles, offsets, outages) have been
encountered by the cooperators, information from the cooperators can sometimes be used directly in the
coordinator model as parameters. This is the basis of another error-directed approach suggested by
Zhang and Zhu (2006) where intensive quantity (i.e., process yield) information generated by the
cooperators is used in the coordinator model. Interestingly, process yields also correspond to pole
sensitivity information since yields alter the rate at which the flow of a stream changes with respect to
the throughput of the unit-operation. In this approach yield constraints may be completely private, i.e.
only known to the cooperators and not to the coordinator given that the details of how these yields are
calculated are the responsibility of the cooperators and may be quite non-linear. It should also be noted
that, similarly to the decomposition approach of Cheng et al. (2006), sensitivity information is used as feedback in the communication between the decision-making layers but referring to different elements: poles in Zhang and Zhu (2006) and prices in Cheng et al. (2006).
[7] Bassett et al. (1996) refer to pole-obstacles as "excess" variables.
Similar to the previous approach of Bassett et al. (1996), the strategy presented by Zhang and Zhu
(2006) does not include any price coordination (Figure 10). Yet, the Zhang and Zhu (2006)
decomposition approach is only applied to a multi-site process optimization problem with a single time-
period which significantly decreases the complexity compared to multi-time-period scheduling
problems. Its effectiveness for multi-time-period and multi-resource scheduling problems remains
unverified especially when multiple time-periods introduce degeneracy and can increase convergence
times as shall be seen later.
3. Hierarchical Decomposition Heuristic (HDH) Algorithm
The hierarchical decomposition heuristic (HDH) achieves a coordinated or hierarchical equilibrium
between decision-making layers. The coordinator is responsible for enabling conflict resolution,
consensus reconciliation, continuous refinement and challenge research which will be explained in
more detail. An overview of the algorithm for the proposed error-directed HDH is seen in Figure 11.
The individual steps of the algorithm presented below will provide the details of the coordinated
reasoning procedure. It is assumed throughout the algorithm description that a scheduling problem is
being decomposed and it is solved with mixed-integer linear programming (MILP).
3.1. Step 1. Solve coordinator problem
Solve the relaxed/reduced/restricted problem in the coordination layer to provably-optimal if possible
or until a specified amount of computational time has elapsed. The primary assumption is that there are
common, shared or linking poles between the coordination and the cooperation layers managed through
the protocols. The initial lower and upper bounds on the quantity, quality and/or logic variables for the
pole-offers (i.e., for the resource, regulation and register-protocols) are exogenously supplied by the
modeler or user when initializing the algorithm. Note that for each resource, regulation and register-
protocol there will be as many poles as the number of time-periods in the decision-making or scheduling
horizon.
$P^{LB}_{i,k-1} \le P_{i,k} \le P^{UB}_{i,k-1}, \quad i = 1 \ldots NP \qquad (1)$
The bounds or pole-outlines in constraint (1) are specified for the number of poles or equivalently the
number of protocols. The k subscript refers to the current iteration where the k-1 refers to the previous
iteration. These lower and upper pole-outlines are only managed by the coordinator and change at each
iteration if there are non-zero pole-obstacles, offsets and/or outages.
3.2. Step 2. Dispatch the solution of the coordinator problem
From the solution of the coordinator problem, quantity, quality and/or logic pole-offers $P_{i,k}$ are obtained and sent to all appropriate cooperators. Any variables in the coordinator and cooperator problems that are not involved in the protocols are only known to themselves and are not externalized
(i.e., they are essentially private variables).
3.3. Step 3. Solve all cooperator sub-problems in parallel
Solve all local rigorous models using the pole-offers from the coordinator. In every cooperator two
pole-obstacle, offset and outage variables are attached to each pole-offer, which are called the pole-obstacle, offset and outage shortage and surplus ($P^{-}_{i,sp,k}$ and $P^{+}_{i,sp,k}$ respectively). In the following equations the expression "$i \in sp$" denotes that pole $i$ belongs to a particular cooperator $sp$, of which for resource-protocols the pole-offer is sent to two cooperators (i.e., the upstream or supplier cooperator and the downstream or demander cooperator).

$P_{i,sp,k} + P^{-}_{i,sp,k} - P^{+}_{i,sp,k} = P_{i,k}, \quad i = 1 \ldots NP,\; sp = 1 \ldots NSP,\; i \in sp \qquad (2)$

The pole-obstacle, offset and outage shortage and surplus variables, i.e., $P^{-}_{i,sp,k}$ and $P^{+}_{i,sp,k}$, must be added to the objective function of the individual cooperators. These variables are minimized using a suitably large weight, usually an order of magnitude larger than any other term in the objective function. Note that the shortage and surplus variables are mutually exclusive or complements to one another. In a
coordinated strategy the cooperators can be solved in parallel which can significantly decrease the
overall computational effort of the HDH if several CPUs are available. Additionally, the cooperators
may be solved until provably-optimal or they may be stopped after a certain amount of time has elapsed.
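A minimal sketch of such an elastic cooperator sub-problem is shown below, assuming the PuLP modeling library; the bounds, variable names and penalty weight are illustrative only and follow the reconstructed form of equation (2).

```python
# Minimal sketch (assuming PuLP; bounds, names and the penalty weight are
# illustrative): a cooperator receives a pole-offer, attaches elastic
# shortage/surplus variables to it and minimizes them with a large weight.

import pulp

def solve_cooperator(pole_offer: float, local_lb: float, local_ub: float, big_w: float = 1000.0):
    prob = pulp.LpProblem("cooperator", pulp.LpMinimize)
    p = pulp.LpVariable("pole", lowBound=local_lb, upBound=local_ub)  # local (private) bounds
    shortage = pulp.LpVariable("shortage", lowBound=0)                # P-(i,sp,k)
    surplus = pulp.LpVariable("surplus", lowBound=0)                  # P+(i,sp,k)
    prob += big_w * (shortage + surplus)                              # heavily penalized deviations
    prob += p + shortage - surplus == pole_offer                      # reconstructed equation (2)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(shortage), pulp.value(surplus)

# e.g. an offer of 10 against local bounds [0, 5] returns a shortage of 5
print(solve_cooperator(10.0, 0.0, 5.0))   # (5.0, 0.0)
```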
3.4. Step 4. Conflict Resolution
Recover and retrieve the solution of the pole-obstacles, offsets and outages of all cooperators and re-
calculate or re-adjust the pole-outlines for the pole-offers in the coordinator problem (constraint (1))
before the current iteration's coordinator problem is optimized. The pole-obstacles, offsets and outages
are the key to the HDH strategy. If all of the pole-obstacles, offsets and outages are zero then a globally
feasible solution has been found given that the coordinator and cooperators are all feasible. Else, the
HDH iterations must continue. In cases where the pole-offer $P_{i,k}$ from the coordinator is sent to more
than one cooperator as is the case for a resource-protocol, the largest pole-obstacle (in absolute terms)
across the cooperators affected by that specific resource i is used.

$P^{LB}_{i,k} = P^{LB}_{i,k-1} - SSL \cdot \max_{sp}\{P^{-}_{i,sp,k-1}\} + SSL \cdot \max_{sp}\{P^{+}_{i,sp,k-1}\} \qquad (3a)$

$P^{UB}_{i,k} = P^{UB}_{i,k-1} - SSU \cdot \max_{sp}\{P^{-}_{i,sp,k-1}\} + SSU \cdot \max_{sp}\{P^{+}_{i,sp,k-1}\}, \quad i = 1 \ldots NP,\; i \in sp \qquad (3b)$
In equations (3a) and (3b) there are two lower and upper step-size adjustment parameters SSL and SSU
respectively which provide the added flexibility of defining different rates of change for the lower and
upper bounds in the coordinator problem. In the motivating examples and in the illustrative industrial
example these parameters were defaulted to a value of 1. The operator $\max_{sp}\{\cdot\}$ implies the maximum over all $sp$. Alternatively, for some decomposed problems, the aggregating operator $\mathrm{sum}_{sp}\{\cdot\}$ may be used instead.
The rationale for the re-adjustment of the pole-outlines at every iteration is essentially taken from the
notion of capacity planning or what is also known as capacitating. The idea is for the lower level
elements (cooperators) to feedback to the higher level element (coordinator) what they are capable of
doing in terms of their achievable capacity or capability.
In real-life decision-making problems, short-term decisions are generally more important than long-
term decisions due to the uncertainty in decision-making systems. A long decision-making horizon
increases the chances of unforeseen events happening and possibly changing future decisions. Therefore if
deemed necessary the pole-obstacles, offsets and outages can be temporally and spatially prioritized
(weighted) in the cooperators' objective functions in order to account for uncertainty. Additionally the
stopping criterion of the HDH may incorporate an equivalent prioritization strategy. Three motivating
examples are now presented to further illustrate the details of the conflict resolution step.
Motivating Example 1
The first example of conflict resolution can be seen in Figure 12a. In this figure the original (private)
bounds for R1 and R2 in both the coordinator and the cooperators are shown. The subscripts 0, 1 and 2
refer to the resources in the coordinator, cooperator 1 and cooperator 2 respectively. The objective of the
coordinator is to maximize the usage of R1 and R2 in the system. Note that R1 and R2 may correspond
to either two different resources exchanged by the cooperators or to a single resource over two time-
periods. In addition, note that there is a discrepancy in the capacity of R1 in cooperator 2 (R12). Due to a
hypothetical breakdown in cooperator 2, R12 cannot be processed and has an upper bound of 0 units but
this is not known to cooperator 1 and is only known to the coordinator after the addition of feedback
yielding updates to the pole-outlines.
The results for this example can be seen in Table 2. Since the objective of the coordinator is to
maximize the usage of R1 and R2 the pole-offers in the first iteration are 10 and 5 respectively. Since
the pole-offer for R1 exceeds the upper bounds of this resource by 5 units (cooperator 1) and by 10 units
(cooperator 2) two pole-obstacles are generated for the R1 pole-offer. The R2 pole-offer is feasible for
both cooperators and therefore no pole-obstacles are generated for this pole-offer. By applying the
conflict resolution step, the bounds of R10 are adjusted by 10 units (the maximum value between 5 and
10, the two pole-obstacles) whereas the bounds of R20 remain unchanged since there were no pole-
obstacles for this resource. By maximizing R1 and R2 with the new bounds, the new pole-offers for R1
and R2 are 0 and 5 respectively which is a globally feasible solution, i.e., no pole-obstacles are
generated for the pole-offers.
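The conflict-resolution loop for a problem in the spirit of motivating example 1 can be sketched in a few lines of plain Python. The numeric bounds below are assumptions inferred from the text (coordinator bounds of [0, 10] on R1 and [0, 5] on R2, cooperator upper bounds on R1 of 5 and 0, R2 feasible at 5 for both, only upper bounds modeled) with SSL = SSU = 1; the printed iterations mirror the offers reported in Table 2.

```python
# Minimal sketch of Steps 1-4 on a problem in the spirit of motivating
# example 1.  All numeric bounds are assumptions for illustration only.

coord_bounds = {"R1": [0.0, 10.0], "R2": [0.0, 5.0]}          # coordinator pole-outlines
coop_ub = {"coop1": {"R1": 5.0, "R2": 5.0},                   # private cooperator upper bounds
           "coop2": {"R1": 0.0, "R2": 5.0}}

for iteration in range(1, 10):
    # Steps 1-2: the coordinator maximizes usage, so it offers the upper pole-outline.
    offers = {r: ub for r, (lb, ub) in coord_bounds.items()}

    # Step 3: each cooperator clips the offer to its own bound and reports
    # the shortage (offer minus what it can actually accept).
    shortages = {r: max(offers[r] - min(offers[r], ub[r]) for ub in coop_ub.values())
                 for r in offers}
    print(f"iteration {iteration}: offers={offers} shortages={shortages}")

    if all(s == 0.0 for s in shortages.values()):
        break                                                  # globally feasible solution found

    # Step 4: shift both pole-outlines down by the largest shortage (SSL = SSU = 1).
    for r, s in shortages.items():
        coord_bounds[r][0] -= s
        coord_bounds[r][1] -= s
```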
Motivating Example 2
The second example is illustrated in Figure 12b. The only difference between motivating examples 1
and 2 is the coordinator model. In motivating example 2 the coordinator must satisfy an additional
inequality constraint involving R10 and R20 which in this case makes the coordinator problem
degenerate.
Table 3 shows the results for this example. Initially the pole-offers for R1 and R2 are 5 and 5
respectively. These pole-offers satisfy all cooperator constraints except for the R1 bound in cooperator 2
which is exceeded by 5 units. For the following iteration the bounds on R10 are shifted by 5 units and
again 5 and 5 are the new pole-offers. The same pole-obstacle exists for R1 in cooperator 2 and the
bounds on R10 are once more shifted by 5 units. In the third iteration the new pole-offers are 0 and 5
which are globally feasible. It is interesting to note that because of the introduction of degeneracy in this
problem the number of iterations required to arrive at a globally feasible solution has increased.
Motivating Example 3
In this third example (Figure 12c) a disjunction or discontinuity is introduced in cooperator 1. In this
case the upper bound of R2 can be one of two functions of R1 depending on the value of R1. The
iterations for this example can be seen in Table 4. Note that the method is able to find a globally feasible
solution in three iterations even for a degenerate problem with disjunctions. This is of particular
importance in mixed-integer linear programming (MILP) problems where the integer variables express
disjunctions in the model and there is a significant amount of degeneracy especially in the time domain.
3.5. Step 5. Consensus Reconciliation
Recover from the pole-obstacles, offsets and outages, which are plus and minus deviations from the
pole-offers as determined by the cooperator optimizations, the level of the pole-offer that each
cooperator can admit or achieve – this is also referred to as the pole-observation.
In order to accelerate the convergence of the HDH to a globally feasible or consistent solution, a term to
minimize the deviation of the current iteration's pole-offers from the consensus between the cooperators'
pole-observations in the previous iteration is added to the objective function of the coordinator. The
consensus between the cooperators is reached by essentially averaging adjacent cooperator pole-
observations (sp and sp') which refer to the same resource i in the coordinator (pole-opinion). This is
represented by the second, averaged term in the left-hand side of equation (4). For regulation and
register-protocols there is only one cooperator involved which is different from the resource-protocol
mentioned, which has both upstream and downstream cooperators.

$P_{i,k} - \frac{P_{i,sp,k-1} + P_{i,sp',k-1}}{2} = P^{+}_{i,k} - P^{-}_{i,k}, \quad i = 1 \ldots NP,\; i \in sp, sp' \qquad (4)$
The pole-outliers $P^{-}_{i,k}$ and $P^{+}_{i,k}$ then must be added to the objective function of the coordinator using a pole-outlier weight $w_i$ which is calculated as the maximum value of the pole-outlier shortage and surplus from the previous iteration, shown in equation (5).

$w_i = \max\{P^{-}_{i,k-1},\; P^{+}_{i,k-1}\}, \quad i = 1 \ldots NP \qquad (5)$
At the first iteration the weights $w_i$ are set to zero which basically removes the pole-outlier minimization
for that iteration. The reason for weighting the pole-outliers by the previous iteration's maximum value
is to give more relative weight to those pole-outliers that are deviating more from the consensus or
average. If there is consensus in the previous iteration for a particular resource then the weight is zero
for the following iteration. Therefore the new objective function term to be minimized for consensus
reconciliation in the coordinator is given below:

$\sum_{i=1}^{NP} w_i \left( P^{-}_{i,k} + P^{+}_{i,k} \right) \qquad (6)$
The rationale for achieving a consensus for each resource, regulation and register-protocol is related to
the notion of harmonizing pole-opinions between one or more cooperators. Given the overriding focus
of the HDH on global feasibility, finding solutions that are consistent amongst each cooperator and the
coordinator is aided by minimizing pole-outliers between the interested parties. Both Steps 4 and 5
represent the tradition of solving combinatorial problems with a method called greedy-constructive
search which has the charter of finding feasible solutions at the start of the search. The notion of local-
improvement search is the guiding principle for the following two coordinated reasoning methods.
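A minimal sketch of the consensus-reconciliation bookkeeping implied by the reconstructed equations (4) to (6) is given below (plain Python; names and numbers are illustrative only).

```python
# Minimal sketch of the consensus reconciliation bookkeeping (reconstructed
# notation): the pole-opinion is the average of the two adjacent pole-
# observations from the previous iteration, the outliers are the +/-
# deviations of the new offer from it, and the weight is the previous
# iteration's largest outlier.

def pole_opinion(obs_up: float, obs_down: float) -> float:
    return 0.5 * (obs_up + obs_down)          # consensus between adjacent cooperators

def pole_outliers(offer: float, opinion: float):
    dev = offer - opinion
    return (max(dev, 0.0), max(-dev, 0.0))    # (surplus outlier, shortage outlier)

def outlier_weight(prev_surplus: float, prev_shortage: float) -> float:
    return max(prev_surplus, prev_shortage)   # equation (5); zero removes the term

# e.g. an offer of 8 against pole-observations of 5 and 7 from the
# upstream and downstream cooperators
opinion = pole_opinion(5.0, 7.0)              # 6.0
print(pole_outliers(8.0, opinion))            # (2.0, 0.0)
```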
3.6. Step 6. Continuous Refinement
Steps 1 to 5 are repeated until the pole-obstacles, offsets and outages of all cooperators are zero, i.e.,
until a coordinated or hierarchical equilibrium is reached. If a different solution is desired after reaching
the equilibrium, the incumbent solution (pole-origin) and all previous incumbent solutions can be
eliminated by using the strategy outlined in Dogan and Grossmann (2006) in their constraints (36) and
(37). The algorithm may be re-started by keeping the same pole-outlines LB
k,iP and UB
k,iP as in the current
iteration or by re-initializing the pole-outlines to their original starting values or to some other value.
This reasoning element may be considered as an improvement stage of the algorithm similar to the
notion of local-improvement search found in meta-heuristics such as simulated annealing and tabu
search.
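For reference, the generic integer ("no-good") cut that excludes an incumbent binary solution $y^{*}$ takes the form below; this is the standard version of such a cut and is not reproduced verbatim from constraints (36) and (37) of Dogan and Grossmann (2006).

$\sum_{j : y^{*}_{j} = 1} (1 - y_j) + \sum_{j : y^{*}_{j} = 0} y_j \ge 1$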
3.7. Step 7. Challenge Research
There may be cases in which there is additional information regarding a target value for the pole-
offers as specified by some higher-level system, that is, above the coordinator layer (i.e., from the
executive to the director). For example, if the coordinator is the master scheduler and the cooperators are
the individual schedulers then the planner can provide these pole-objectives (targets). The pole-
objectives have two benefits. First, they can decrease the number of iterations given that if the plan is
achievable (thus not necessarily over-optimized or “cherry-picked”) then the pole-objectives can help
with maneuvering the coordinator to find globally feasible solutions faster. In addition, these pole-
objectives can facilitate finding better globally feasible solutions given that the planner's goal is to push
the system to regions of higher efficiency, effectiveness and economy. The use of pole-objectives is also
a similar idea to a setpoint that is sent from the plant-wide optimizer to individually distributed model
predictive controllers (MPC) (Lu, 2003).
When applying the challenge research step the additional variables representing the deviation of the
pole-offers from the pole-objectives are called pole-opportunities and are determined as:

$P_{i,k} - PP_i = PP^{+}_{i,k} - PP^{-}_{i,k}, \quad i = 1 \ldots NP \qquad (7)$
The additional objective function term in the coordinator is:

$\sum_{i=1}^{NP} ww_i \left( PP^{-}_{i,k} + PP^{+}_{i,k} \right) \qquad (8)$
where the pole-opportunity weights $ww_i$ are determined according to the same strategy as the pole-outlier weights $w_i$. We liken the challenge research phase to an aiming or advancement aspect of the
reasoning which is also related to a local-improvement search.
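A minimal sketch of the challenge-research penalty in the reconstructed equations (7) and (8) is given below (plain Python; the target and weight values are arbitrary assumptions), showing the contribution of one pole-offer deviating from its pole-objective.

```python
# Minimal sketch of the challenge-research term (reconstructed notation):
# the penalty contribution of one pole-offer deviating from its
# pole-objective (the planner's target).

def opportunity_penalty(offer: float, objective: float, weight: float) -> float:
    dev = offer - objective
    return weight * (max(dev, 0.0) + max(-dev, 0.0))   # ww_i * (PP+ + PP-) = ww_i * |deviation|

# e.g. an offer of 95 against a planner target of 100 with weight 2.0
print(opportunity_penalty(95.0, 100.0, 2.0))           # 10.0
```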
The HDH can be represented in a block diagram in a similar fashion to model predictive controllers
(MPC), of which there are three forms: linear, non-linear and hybrid. Figure 13 shows the coordinator as
the MPC's feedforward engine (or economic/efficiency optimizer) that sends setpoints (pole-offers) to
the cooperators which correspond to the MPC's feedback engine (or actual plant). The measured
variables of the system are the pole-observations while the predicted variables are the setpoints. The
difference between the predicted and the measured variables corresponds to what is known as the MPC's
bias terms (pole-obstacles, offsets and outages) which are fed back to the coordinator for re-
optimization. If pole-objectives exist, they are used within the coordinator's optimization problem.
When limited feedback information such as the bias updating strategy in MPCs or linear real-time
optimizers (RTOs) is used to update model parameters there is a possibility that the closed-loop system
will not converge to the optimal operating policy in the plant even for linear systems with no structural
mismatch. Forbes and Marlin (1994) were the first to demonstrate that if the parameter values of the
left-hand side constraint coefficients deviate enough from the perfect values (from the plant) the
closed-loop system may converge to a sub-optimal corner point or active set. Due to the previous
relationship shown between the HDH and MPC the conclusions in Forbes and Marlin (1994) also
indicate that since there will usually be significant parametric and structural mismatch between the
coordinator and cooperators the system may converge to sub-optimal solutions. Mismatch primarily
originates due to the existence of private constraints. Zyngier (2006) developed methods that monitored,
diagnosed and enhanced the performance of such types of closed-loop optimization systems by (1)
assessing the potential effects of parameter uncertainty on the objective function (determination of a
"profit gap"), (2) detecting the main parametric contributors to the profit gap using a novel sensitivity
analysis method that did not require the assumption of a fixed active set, and (3) reducing the profit gap
by applying designed experiments to the plant. The work in Zyngier (2006) could potentially be
applicable to the HDH to gauge the difference in terms of the objective function value (profit, cost, etc.)
between using globally rigorous and relaxed or reduced coordinators. In addition, the diagnostics section of
the work would focus model improvement efforts on the most significant section of the model in terms
of objective function value.
It is also possible to align the HDH directly into the related continuous-improvement philosophies or
paradigms of the Deming Wheel, Shewhart Cycle and Kaizen. We refer to this as the plan-perform-
perfect-loop or P3-loop shown in Figure 14 which has both feedforward and feedback components
(Kelly, 2005a). In the context of the HDH, both the plan and perfect functions are included in the
coordinator while the perform function embodies the cooperators. There is feedforward from the plan
function to both the perform and the perfect functions and there is a feedback loop from the perform
function through the perfect function back to the plan function. It should be emphasized that the perfect
function’s feedback to the plan function can take several forms of which updating, re-calibrating or re-
training the decision-making model in terms of both structure and parameters inside the plan function
should always be considered inside the perfect function instead of simply resetting its bounds or
constraint right-hand-sides (i.e., capacity planning or re-capacitating). The dotted rectangle represents an
input-output relation for any manufacturing or production plant, site, enterprise or system in terms of
how orders are inputted and how objects (material, information and work products) are outputted from
the system. The P3-loop is also a useful tool in analyzing and interpreting the system dynamics in terms
of the many contributors to the variability[8] of lead-time, cycle-time, dead-time or delay of when an order is placed by an external or internal customer and when the objects are finally received by the customer some time into the future. In other words, the dotted rectangle is the overall system black-box or block. Finally, the P3-loop, which is essentially an organizational or managerial archetype, exists at some level or degree in all enterprises, whether in the process, discrete-parts or service industries, and is also at the heart of the HDH.
[8] It is well-known in queuing theory that variability causes congestion and congestion causes a reduction in capacity or capability.
The HDH algorithm outlined in Steps 1 to 7 is a method for obtaining globally feasible solutions.
While the HDH can be used independently when any globally feasible solution is equally valuable to the
decision-making system, it can also be embedded in a global optimization framework such as
Lagrangean decomposition as a means of obtaining the globally feasible lower bound of a maximization
problem at each iteration of the overall algorithm (Wu and Ierapetritou, 2003; Karuppiah and
Grossmann, 2006).
As previously mentioned it should be highlighted that the coordinator used in the HDH may be any
combination of global or local relaxed, reduced, restricted or rigorous model. Depending on the system,
the coordinator may have detailed information about one or more of the cooperators, indicating that all
of those cooperators' constraints are essentially public. In these cases, it is not necessary to include those
cooperators explicitly as sub-problems in the HDH since the pole-offers that are made by the
coordinator will always be feasible with respect to the cooperators which only contain public
constraints. This introduces a significant amount of flexibility in handling decomposed systems since
there is more freedom to manage the trade-off between the computational complexity of solving a single
global rigorous model and the several iterations resulting from the decomposition of the system into
multiple local rigorous models.
Illustrations of the joint feasibility regions of the coordinator and cooperators are shown in Figure 15.
If using a global rigorous model as a coordinator (Figure 15a) the pole-offers that are calculated by the
coordinator will always be feasible for all cooperators and thus the HDH will converge in a single
iteration. On the other hand, if the coordinator is partially rigorous or restricted, it contains the entire
feasible region of a sub-section of the decision-making system (i.e., cooperator 2 in Figure 15b). In this
case, explicitly including cooperator 2 as an element in the cooperation layer of the HDH is optional
since all pole-offers from the coordinator will be feasible for cooperator 2. If the coordinator is a global
reduced model, it may not include part of the joint feasible solution space of the cooperation layer
(Figure 15c). When the feasible set of the coordinator model contains the feasible sets of all cooperators,
i.e., when the coordinator is a global relaxed problem, the coordination is always feasible when all
cooperators are feasible (Figure 15d).
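A practical consequence of Figures 15a and 15b, noted above, is that any cooperator whose constraints are entirely public to the coordinator never needs its own sub-problem in the cooperation layer. The toy filter below, with assumed data only, illustrates that screening step.

```python
# Toy illustration (assumed data): cooperators whose constraints are all public,
# i.e. already embedded in the coordinator, can be dropped from the cooperation
# layer because every coordinator pole-offer is feasible for them by construction.
cooperators = [
    {"name": "off-sites (cooperator 1)", "private_constraints": ["settling-times", "lot-size lower bounds"]},
    {"name": "on-sites (cooperator 2)", "private_constraints": []},  # fully public
]

explicit = [c["name"] for c in cooperators if c["private_constraints"]]
print(explicit)  # -> ['off-sites (cooperator 1)']
```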
Figure 16 illustrates how the HDH can be expressed in the general decomposition framework. It
should be noted that although we use the term "weight adjustment" this is not related to the weight-
directed method previously described. The weight adjustment method is used to determine the objective
function weights for Steps 5 and 7, $w_i$ and $ww_i$ respectively.
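Equations 4 to 8 are defined earlier in the paper and are not repeated here; purely as a hedged sketch, the weighted deviation objective that these weights feed could be assembled as below, where all names (deviation_objective, p_short, and so on) are illustrative and not the paper's notation.

```python
# Hedged sketch of a weighted deviation objective over the pole-outlier (w)
# and pole-opportunity (ww) shortage/surplus terms; not the paper's exact form.
def deviation_objective(p_short, p_surp, pp_short, pp_surp, w, ww):
    outlier = sum(w[i] * (p_short[i] + p_surp[i]) for i in w)
    opportunity = sum(ww[i] * (pp_short[i] + pp_surp[i]) for i in ww)
    return outlier + opportunity
```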
The following section presents the application of the HDH to an industrial off-site and on-site storage
and handling system. Three different decomposition strategies are applied, namely a coordinated
strategy using a relaxed coordinator model, a coordinated strategy using a restricted or partially rigorous
model and a collaborative strategy. The results are then compared with the centralized decision-making
strategy of solving the global rigorous coordinator model. Finally, the conclusions are summarized and
directions for future research are presented.
4. Illustrative Example: Off-Site and On-Site Storage and Handling System
The layout and connectivity of the off-site and on-site storage tanks, pumping and blending units at a
petrochemical processing complex in Asia can be seen in Figure 17. In this figure the diamonds represent
perimeter-units through which material enters or leaves the system, triangles represent pool-units that
can store material for indefinite amounts of time, and rectangles represent pipeline-units and continuous-
process-units (blenders). This system has been modeled under the unit-operation-stock superstructure
(UOSS) and quantity-logic-quality paradigm (QLQP) described in Kelly (2004a), (2005b) and (2006)
and using the logistics inventory modeling details shown in Zyngier and Kelly (2007). Material arrives
at the off-site terminal tanks through three supply perimeter-units (S11, S12 and S13) and is directed to
one of six pool-units (T11-T16). The material is then taken through one of two pipeline-units (P1 or
P12) to the on-site facilities. After storage in pool-units T23-T24 the material is then blended in the B21
and B22 blending process-units, stored in pool-units T25 or T26 and then sent to one of two demand
perimeter-units (D21 or D22). The objective function of the scheduling problem is to maximize profit,
which in this case is only a function of D21 and D22 ($10 for every kMT of material allocated to a
demand perimeter-unit); i.e., there are no costs associated with S11, S12, S13 and S21, which is not
unreasonable when feedstock is supplied in fixed amounts according to a fixed delivery schedule. The
scheduling horizon is 8-days with 24-hour time-period durations.
This system is naturally decomposable into the off-sites area (Figure 18) and the on-sites area (Figure
19). Note that this illustrative example demonstrates two different strategies for handling resource-
protocols between cooperators; no regulation- or register-protocols are used in this example. The first
strategy is to decompose the resource-protocol across a unit, which is done on pipeline-unit P1. The
second strategy is to decompose the system across a connection between units (P12 and the pool-units),
which requires modeling an additional hypothetical process-unit (P22) that does not exist physically
in our multi-site system.
In addition to the centralized (non-decomposed) approach, three different decomposition strategies
were applied to this system: a collaborative strategy and two coordinated strategies using a relaxed and a
partially rigorous coordinator model (Figure 20). The centralized approach (Figure 20a) implies that the
coordinator is a global rigorous model and therefore the HDH is guaranteed to converge in a single
iteration, although it may take an unreasonable amount of time to solve. Since all pole-offers will be
feasible for the cooperators, it is not necessary to include the cooperators explicitly in the algorithm. In
the collaborative strategy (Figure 20b) no information other than the resource-poles is shared across the
collaborators. This strategy can be easily inserted into the HDH structure by modeling a coordinator that
has perfect knowledge about the constraints at the on-sites (cooperator 2) but no knowledge about the
off-sites. Therefore the coordinator can be interpreted as a local rigorous model of the on-sites only. In
the coordinated strategy with a relaxed coordinator (Figure 20c), each cooperator is only willing to share
its connectivity information (i.e., number of units and their pumping interconnections) and the upper
bounds on the pool-unit lot-sizes but no additional operational logic details such as settling-times or
lower bounds on pool-unit lot-sizes. The coordinated strategy using a partially rigorous coordinator
model (Figure 20d) contains all of the rigorous model information for cooperator 2 and only the
connectivity information of cooperator 1 in the coordination layer. Since the rigorous model for
cooperator 2 exists within the coordinator, it is not necessary to include this cooperator explicitly in the
HDH: all of the pole-offers will necessarily be feasible for cooperator 2 and no feedback is required.
The details of the modeling can be found in Table 5 to Table 13. It should be mentioned that for this
illustrative example only conflict resolution and consensus reconciliation coordinated reasoning tactics
were applied. Continuous refinement and challenge research were not used given that there were no
available higher or upper-level objectives/targets for the resource-protocols and we felt it was necessary
to show the results without further complicating the problem with incumbent elimination quantity and
logic cuts. Hence, only the greedy-constructive elements of the HDH search are presented in the
illustrative example but not the local-improvement elements.
In Table 5 to Table 8, shut-down-when-below (SDWB) refers to the logistics constraints (involving
both quantity and logic variables) that only allow the shut-down of an operation-mode when the hold-up
is below a certain threshold value. The SDWB constraint can be modeled as follows (Zyngier and Kelly, 2007):

$xh_{pp,t} \le XH^{max}_{pl} - \left(XH^{max}_{pl} - XH^{SDWB}_{pl}\right)\left(y_{pl,t-1} - y_{pl,t}\right), \quad pl = 1 \ldots NPL, \; t = 1 \ldots NT$   (9)

where $xh_{pp,t}$ refers to the hold-up of physical unit $pp$ at time-period $t$, $y_{pl,t}$ is the logic variable for the
set-up of logical unit $pl$ at time-period $t$, $XH^{max}_{pl}$ corresponds to the maximum hold-up or inventory of
logical unit $pl$ and $XH^{SDWB}_{pl}$ is the shut-down-when-below lot-size of logical pool-unit $pl$.
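To make the logic of constraint (9) concrete, the short sketch below builds it for a single pool-unit with the open-source PuLP modeler. This is purely illustrative: the paper's models are written in XPRESS-MOSEL, and the bound values (XH_MAX, XH_SDWB) are example numbers in the spirit of Table 5 rather than data taken from it.

```python
# Illustrative SDWB constraint (9) for one pool-unit over NT periods.  PuLP is
# used only as a convenient open-source MILP layer; bounds are example values.
from pulp import LpProblem, LpVariable, LpBinary, LpMaximize

NT = 8
XH_MAX, XH_SDWB = 12.5, 4.0   # maximum hold-up and shut-down-when-below lot-size (kMT)

prob = LpProblem("sdwb_sketch", LpMaximize)
xh = [LpVariable(f"xh_{t}", lowBound=0, upBound=XH_MAX) for t in range(NT + 1)]
y = [LpVariable(f"y_{t}", cat=LpBinary) for t in range(NT + 1)]

for t in range(1, NT + 1):
    # A shut-down in period t (y[t-1] = 1, y[t] = 0) tightens the hold-up bound
    # to XH_SDWB; otherwise the bound relaxes back to XH_MAX.
    prob += xh[t] <= XH_MAX - (XH_MAX - XH_SDWB) * (y[t - 1] - y[t])
```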
The fill-draw delay (FDD) (Table 5 to Table 8) represents the timing between the last filling sub-
operation and the following drawing sub-operation. An FDD of zero indicates a standing-gauge pool-
unit. In the following constraints, $jin$ and $jout$ indicate the specific inlet- and outlet-ports attached to the
units that are downstream and upstream of the logical pool-unit respectively, while the $ipl$ and $jpl$
subscripts refer to the inlet- and outlet-port on the logical pool-unit itself.

$y_{jpl,jin,t'} \le 1 - y_{jout,ipl,t}, \quad t' = t+1 \ldots \min\left(t + FDD^{min}_{pl}, NT\right), \; t = 1 \ldots NT$   (10)

Constraint (10) stipulates that a draw cannot occur until after the lower FDD elapses. There is also an
upper bound on the FDD which indicates the maximum duration between the last filling and the following
drawing of stock out of the logical pool-unit, as indicated below. For a more thorough description of the
quantity, logic and logistics constraints encountered in inventory models of the process industries the reader
should refer to Kelly (2006) and to Zyngier and Kelly (2007).
$\sum_{t'=t+1}^{t+FDD^{max}_{pl}} y_{jpl,jin,t'} \ge y_{jout,ipl,t}, \quad t = 1 \ldots NT \;|\; t + FDD^{max}_{pl} \le NT$   (11)
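A similarly hedged sketch of the minimum and maximum fill-draw delay constraints (10) and (11) follows, again in PuLP for a single pool-unit; y_fill and y_draw stand in for the movement logic variables y_{jout,ipl,t} and y_{jpl,jin,t}, and the delay values are illustrative.

```python
# Illustrative minimum/maximum fill-draw delay constraints (10)-(11) for one
# pool-unit; y_fill/y_draw are stand-ins for the movement logic variables.
from pulp import LpProblem, LpVariable, LpBinary, lpSum, LpMaximize

NT, FDD_MIN, FDD_MAX = 8, 2, 5
prob = LpProblem("fdd_sketch", LpMaximize)
y_fill = [LpVariable(f"yfill_{t}", cat=LpBinary) for t in range(NT + 1)]
y_draw = [LpVariable(f"ydraw_{t}", cat=LpBinary) for t in range(NT + 1)]

for t in range(1, NT + 1):
    # (10): no draw may occur within FDD_MIN periods after a fill in period t.
    for tp in range(t + 1, min(t + FDD_MIN, NT) + 1):
        prob += y_draw[tp] <= 1 - y_fill[t]
    # (11): a draw must occur within FDD_MAX periods after a fill in period t.
    if t + FDD_MAX <= NT:
        prob += lpSum(y_draw[tp] for tp in range(t + 1, t + FDD_MAX + 1)) >= y_fill[t]
```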
In Table 9 to Table 13 additional quantity, logic and logistics details are presented. Table 9 and Table
10 provide the semi-continuous flow logistics constraints for the outlet- and inlet-ports respectively,
which model flows that can either be zero or between their lower and upper bounds within a time-
period. Table 11 shows the opening inventories, hold-ups or initial lot-size amounts in the pool-units at
the start of the schedule, together with the initial mode-operation logic set-ups indicating whether the
pool-unit is in material-operation or service A, B, C or ABC. The material balance for pool-units can be
expressed as:

$xh_{pp,t} = xh_{pp,t-1} + xf_{jin,t} - xf_{jout,t}, \quad t = 1 \ldots NT$   (12)

where $xf_{jin,t}$ and $xf_{jout,t}$ refer to the flows entering and leaving physical unit $pp$ at time-period $t$
respectively. Table 12 specifies the timing of what we call inverse-yield orders. Inverse-yields are really
recipe, intensity or proportion amounts of how much of stocks A, B and C should be mixed together in
the blending process-units; these were specified by a higher-level planning system. Table 13 shows the
supply amounts of A, B and C from the supply perimeter-units over the scheduling horizon.
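To show how the material balance (12) combines with the semi-continuous flow logistics of Tables 9 and 10, the sketch below models both for a single pool-unit and one outlet flow; the bounds and opening inventory are illustrative values, and PuLP is used only for consistency with the earlier sketches.

```python
# Illustrative pool-unit material balance (12) plus a semi-continuous outlet
# flow (Tables 9-10: flow is either zero or between its bounds); values assumed.
from pulp import LpProblem, LpVariable, LpBinary, LpMaximize

NT = 8
XF_LB, XF_UB = 0.25, 0.32     # example outlet flow bounds
XH_MAX, XH0 = 12.5, 10.579    # maximum hold-up and opening inventory (kMT)

prob = LpProblem("balance_sketch", LpMaximize)
xh = [LpVariable(f"xh_{t}", lowBound=0, upBound=XH_MAX) for t in range(NT + 1)]
xf_in = [LpVariable(f"xfin_{t}", lowBound=0) for t in range(NT + 1)]
xf_out = [LpVariable(f"xfout_{t}", lowBound=0) for t in range(NT + 1)]
y_out = [LpVariable(f"yout_{t}", cat=LpBinary) for t in range(NT + 1)]

prob += xh[0] == XH0                                        # opening lot-size (Table 11)
for t in range(1, NT + 1):
    prob += xh[t] == xh[t - 1] + xf_in[t] - xf_out[t]       # material balance (12)
    prob += xf_out[t] >= XF_LB * y_out[t]                   # semi-continuous draw:
    prob += xf_out[t] <= XF_UB * y_out[t]                   # zero, or within [LB, UB]
```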
For each of the three decomposition strategies two different demand scenarios were applied: an
aggressive scenario where all demands were fixed (Table 14) (i.e., the lower and upper bounds are
equal) and a conservative scenario where the demands had zero lower bounds (Table 15). Table 16
displays the problem statistics. These problems were generated and solved using XPRESS-MOSEL
version 1.6.3 and XPRESS-MILP version 17.10.08 with all default settings (Gueret et al., 2002). In
addition, this XPRESS-MOSEL version uses XPRESS-MOSEL-PARALLEL, which provided the ability to
solve sub-problems simultaneously on different CPUs. In order to limit computation times, the time spent
on any individual optimization problem (coordinator and cooperators) was limited to either 30 CPU-seconds
or the time to find the first integer-feasible solution (whichever was longer).
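The stopping rule for each sub-problem (stop at whichever comes last of 30 CPU-seconds or the first integer-feasible solution) is implemented through solver controls in practice; the solver-agnostic sketch below only illustrates the logic, and the methods solve_step, has_incumbent and best_solution are assumed placeholders rather than part of any real solver API.

```python
# Solver-agnostic sketch of the per-sub-problem stopping rule: keep searching
# until at least the time limit has elapsed AND an integer-feasible incumbent
# exists (i.e., whichever condition is satisfied last), or the search finishes.
import time

def solve_with_stopping_rule(model, time_limit_s=30.0):
    start = time.process_time()
    while True:
        finished = model.solve_step()          # advance the MILP search a little (assumed method)
        elapsed = time.process_time() - start
        if finished:
            break
        if elapsed >= time_limit_s and model.has_incumbent():
            break
    return model.best_solution()
```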
Figure 21 shows a Gantt chart with the operation modes of units throughout the scheduling horizon for
the solution from the centralized strategy. The conservative scenario allows the non-fulfillment of orders
since the lower bounds on the demands are zero, thus making for an easier scheduling problem from a
schedule-feasibility perspective. This statement is confirmed by the results in Table 17 (aggressive
scenario) and Table 18 (conservative scenario). As expected, all decomposition strategies in the
conservative scenario (Figure 22) converged significantly faster, owing to requiring fewer HDH
iterations, than in the aggressive scenario (Figure 23). On the other hand, the overall objective function
was significantly higher in the aggressive scenario since all demands had to be fully satisfied.
In terms of the performance of the decomposition approaches, the collaborative strategy found better
feasible solutions than the coordinated approaches. This is partly due to the fact that the coordinator may
under-estimate (sand-bag) the performance of the cooperators, which does not occur in the collaborative
strategy. In contrast, the collaborative strategy needed more iterations than the coordinated approaches
in the more difficult aggressive scenario. It should again be highlighted that the computational
performance of the coordinated strategies is enhanced by the ability to parallelize the
cooperator sub-problems, which is not possible with a collaborative strategy given that a collaborative
approach is performed in-series or in-priority.
As previously mentioned, the collaborative strategy treats the on-sites (cooperator 2) as a
local rigorous coordinator in the HDH and therefore the decisions made by the on-sites have
priority over the decisions made by the off-sites (cooperator 1). This is aligned with the notion of
scheduling the bottlenecks first or focusing on the bottlenecks. The opposite collaboration strategy was
attempted, where the off-sites were modeled as the coordinator and therefore their decisions had top
priority. This strategy failed to find a globally feasible solution, which is easily explained by the fact that
the off-sites do not constitute a bottleneck nor are they the true driver of the decision-making in this
system. This case demonstrates that in order to successfully apply a collaborative strategy, as previously
stated, it is not only necessary to identify the correct segmentation of the decision-making system but
also the priority of its individual elements.
The use of a partially rigorous coordinator model significantly improved the speed of the HDH as
expected. In the aggressive scenario the partially rigorous coordination also provided a better globally
feasible solution than the relaxed coordinator. The centralized strategy implies the use of a global
rigorous model in the coordination layer; its first feasible solution took 10 and 4 times longer to find than
that of the collaborative strategy in the aggressive and conservative scenarios respectively. In this example the first
feasible solution of the centralized system was also the global provably-optimal one. It is interesting to
note that the global optimum was also achieved by the collaborative strategy in the aggressive approach.
This shows that while it is not guaranteed that the global optimum will be found by using the HDH, this
heuristic may in fact be able to obtain it albeit serendipitously.
5. Conclusions
The focus of this paper has been to present a heuristic which can be used to find globally feasible
solutions to typically large decentralized and distributed decision-making problems when a centralized
approach is not possible. A standardized nomenclature was established to better describe the
communication, coordination and cooperation between two hierarchical layers of a decomposed
problem. The HDH was applied to an illustrative example based on an actual industrial multi-site system
and was able to solve this problem faster than a centralized model of the same problem when using both
coordinated and collaborative approaches.
In addition, the HDH has been contrasted with other methods structured around the notions of
price/pole-directed, weight-directed and error-directed decomposition strategies. Even though the HDH
is currently an error-directed method, future work will focus on devising weight-directed enhancements
to this heuristic using aggregation and disaggregation rules to automatically transform the global
rigorous model into both local rigorous and global relaxed/reduced/restricted models. It is expected that
such an enhancement could minimize the model-mismatch or inaccuracies introduced by the contrast
between public and private constraints. In essence, this is equivalent to using an exogenous (global
rigorous) model supplied by the modeler or scheduling analyst and programmatically generating
coordinator and cooperator endogenous models using "reduction rules".
And finally, we would like to emphasize that the HDH is only a rule-of-thumb for helping with the
diverse reasoning behind the coordination of essentially bi-level decomposition optimization problems.
For tightly bottlenecked and/or critically resource-constrained problems, attention to how the
coordinator is set up and how the cooperators are separated will determine its success, as is the case with
all heuristic approaches.
Nomenclature
Sets and indices
i = 1…NP number of poles
k = 1…NI number of iterations
pp = 1…NPP number of physical units
pl = 1…NPL number of logical units
sp = 1…NSP number of cooperating sub-problems
t = 1…NT number of time-periods
Parameters
SSL step size for lower pole-outline (bound) adjustment
SSU step size for upper pole-outline (bound) adjustment
$w_i$ weight parameters for the pole-outlier objective function terms
$ww_i$ weight parameters for the pole-opportunity objective function terms
$XH^{min}_{pl}$ minimum hold-up or inventory of logical unit pl
$XH^{max}_{pl}$ maximum hold-up or inventory of logical unit pl
$XH^{SDWB}_{pl}$ shut-down-when-below lot-size of logical pool-unit pl
$FDD^{min}_{pl}$ minimum fill-draw delay on logical unit pl
$FDD^{max}_{pl}$ maximum fill-draw delay on logical unit pl
Variables
$P_{i,k}$ pole-offer (from coordinator) for resource, regulation and/or register i at iteration k
$P_{i,sp,k}$ pole-observation for resource i at cooperating sub-problem sp at iteration k
$P^{LB}_{i,k}$ lower pole-outline (bound) of pole-offer i at iteration k
$P^{UB}_{i,k}$ upper pole-outline (bound) of pole-offer i at iteration k
$PP_i$ pole-objective i
$P^{-}_{i,k}$ pole-outlier shortage for resource i in the coordinator at iteration k
$P^{+}_{i,k}$ pole-outlier surplus for resource i in the coordinator at iteration k
$P^{-}_{i,sp,k}$ pole-offer shortage for resource i at cooperating sub-problem sp at iteration k
$P^{+}_{i,sp,k}$ pole-offer surplus for resource i at cooperating sub-problem sp at iteration k
$PP^{-}_{i,k}$ pole-opportunity shortage for resource i in the coordinator at iteration k
$PP^{+}_{i,k}$ pole-opportunity surplus for resource i in the coordinator at iteration k
$xf_{jin,t}$ flow that enters a physical unit at time-period t
$xf_{jout,t}$ flow that leaves a physical unit at time-period t
$xh_{pp,t}$ hold-up of physical unit pp at time-period t
$y_{jout,jin,t}$ logic variable for the movement from outlet-port jout to inlet-port jin at time-period t
$y_{pl,t}$ logic variable for the set-up of logical unit pl at time-period t
$\lambda$ Lagrange multiplier in Lagrangean decomposition
References
Bassett, M.H.; Pekny, J.F.; Reklaitis, G.V. (1996). Decomposition techniques for the solution of
large-scale scheduling problems. AIChE J., 42, 3373.
Cheng, R.; Forbes, J.F.; Yip, W.S. (2007). Price-driven coordination method for solving plant-
wide MPC problems. J. Proc. Control, 17, 429.
Dantzig, G.B.; Wolfe, P. (1960). Decomposition principle for linear programs. Oper. Res., 8, 1.
Dogan, M.E.; Grossmann, I.E. (2006). A decomposition method for the simultaneous planning and
scheduling of single-stage continuous multiproduct plants. Ind. Eng. Chem. Res., 45, 299.
Forbes, J.F.; Marlin, T.E. (1994). Model accuracy for economic optimizing controllers: the bias
update case. Comput. Chem. Eng., 18, 497.
Geoffrion, A.M. (1972). Generalized Benders decomposition. J. Optim. Theory and Appl., 10, 237.
Gueret, C.; Prins, C.; Sevaux, M.; Heipcke, S. (reviser and translator) (2002). Applications of
Optimization with Xpress-MP, Dash Optimization, Blisworth, Northants, UK.
Jemai, Z.; Karaesmen, F. (2006). Decentralized inventory control in a two-stage capacitated supply
chain. To appear in IIE Transactions.
Jörnsten, K.; Leisten, R. (1995). Decomposition and iterative aggregation in hierarchical and
decentralized planning structures. Eur. J. Oper. Res., 86, 120.
Jose, R.A. (1999). Ph.D. thesis, University of Pennsylvania, PA, USA.
Jose, R.A.; Ungar, L.H. (1998). Auction-Driven Coordination for Plantwide Control. Foundations
of Computer-Aided Process Operation, Snowbird, USA.
Jose, R.A.; Ungar, L.H. (2000). Pricing interprocess streams using slack auctions. AIChE J., 46,
575.
Karuppiah, R.; Grossmann, I.E. (2006). A Lagrangean based branch-and-cut algorithm for global
optimization of nonconvex mixed-integer nonlinear programs with decomposable structures.
Submitted to J. Global Optim.
Kelly, J.D. (2002). Chronological Decomposition Heuristic for Scheduling: Divide and Conquer
Method. AIChE J., 48, 2995.
Kelly, J.D. (2004a). Production Modeling for Multimodal Operations. Chemical Engineering
Progress, February, 44.
Kelly, J.D. (2004b). Stock Decomposition Heuristic for Scheduling: A Priority Dispatch Rule
Approach. Honeywell Internal Technical Report.
Kelly, J.D.; Mann, J.L. (2004). Flowsheet Decomposition Heuristic for Scheduling: a Relax-and-
Fix Method. Comput. Chem. Eng., 28, 2193.
Kelly, J.D. (2005a). Modeling production-chain information. Chemical Engineering Progress,
February, 28.
Kelly, J.D. (2005b). The Unit-Operation-Stock Superstructure (UOSS) and the Quantity-Logic-
Quality Paradigm (QLQP) for Production Scheduling in the Process Industries. Proceedings of the
Multidisciplinary Conference on Scheduling Theory and Applications (MISTA), 1, 327.
Kelly, J.D. (2006). Logistics: the missing link in blend scheduling optimization. Hydrocarbon
Processing, June, 45.
Lu, J. (2003). Challenging control problems and emerging technologies in enterprise optimization.
Control Eng. Practice, 11, 847.
Wilkinson, S.J. (1996). Ph.D. thesis, Imperial College of Science, Technology and Medicine,
London, UK.
Wu, D.; Ierapetritou, M.G. (2003). Decomposition approaches for the efficient solution of short-
term scheduling problems. Comput. Chem. Eng., 27, 1261.
Zhang, N.; Zhu, X.X. (2006). Novel modeling and decomposition strategy for total site
optimization. Comput. Chem. Eng., 30, 765.
Zyngier, D. (2006). Ph.D. thesis, McMaster University, Hamilton, ON, Canada.
Zyngier, D.; Kelly, J.D. (2007). Multi-Product Inventory Logistics Modeling in the Process
Industries. Accepted for publication as a chapter in the book from the DIMACS and ExxonMobil
Workshop on Computational Optimization and Logistics Challenges in the Enterprise (COLCE),
eds. K.C. Furman and I.E. Grossmann.
Figure 1. Bi-level layers involved in decomposition. [Coordination layer above a cooperation layer, linked by feedforward and feedback arrows.]
Figure 2. Defects versus productivity trade-off curves for different reasoning isotherms. [Axes: defects versus productivity; curves for centralized, coordinated, collaborative and competitive reasoning.]
Figure 3. General structure of decomposed problems. [Pole and price coordinators above cooperators 1 to 3.]
Figure 4. Types of poles and protocols. [Resource-, regulation- and register-protocols mapped to flow-, component-, property-, mode-, material-, move-, holdup- and condition-poles; pole-offers flow down and pole-obstacles, offsets and outages flow up.]
Figure 5. Lagrangean decomposition. [Lagrange multiplier (λ) update and upper-bound (UB) calculation above cooperators 1 to 3 exchanging pole-offers.]
Figure 6. Auction-based decomposition (Jose, 1999). [Determination of reference poles and price adjustment above cooperators 1 to 3; prices sent down, pole-obstacles, offsets and outages returned.]
Figure 7. Auction-based decomposition with price-elasticity (Cheng et al., 2006). [Price adjustment with price-elasticity (sensitivity) feedback from the cooperators; no pole coordination.]
Figure 8. Iterative aggregation/disaggregation decomposition (Jörnsten and Leisten, 1995). [Aggregated model sends pole-offers and receives pole-observations; no price coordination.]
Figure 9. Planning to scheduling decomposition (Bassett et al., 1996). [Relaxed global planning problem above scheduling cooperators; pole-offers sent down, pole-obstacles, offsets and outages returned; no price coordination.]
Figure 10. Yield updating decomposition (Zhang and Zhu, 2006). [Relaxed global problem sends pole-offers and receives pole-sensitivities (yields, inverse-yields); no price coordination.]
Figure 11. Overview of proposed HDH algorithm. [Flowchart: solve the coordinator problem (conflict resolution, Equations 3a and 3b; consensus reconciliation, Equations 4, 5 and 6), send pole-offers to the cooperator sub-problems and receive pole-obstacles, offsets and outages; if any cooperator is infeasible, re-adjust the pole-outlines in the coordinator based on this feedback; when pole-objectives exist, also minimize the deviation of the pole-offers from the pole-objectives (challenge research, Equations 7 and 8); if another solution is desired, exclude the current solution from the coordinator keeping current or different pole-outlines, otherwise stop.]
Figure 12. Conflict resolution – three motivating examples. [(a) Coordinator: 0 ≤ R10 ≤ 10, 0 ≤ R20 ≤ 5; cooperator 1: 0 ≤ R11 ≤ 5, 0 ≤ R21 ≤ 10; cooperator 2: 0 ≤ R12 ≤ 0, 0 ≤ R22 ≤ 10. (b) As in (a) with the additional coordinator constraint R10 + R20 ≤ 10. (c) Coordinator: 0 ≤ R10 ≤ 10, 0 ≤ R20 ≤ 15, R10 + R20 ≤ 20; cooperator 1: 0 ≤ R11 ≤ 5, with R21 ≤ 2·R11 for R11 ≤ 5 and R21 ≤ 0.5·R11 for R11 > 5; cooperator 2: 0 ≤ R12 ≤ 20, 0 ≤ R22 ≤ 20.]
Figure 13. Model predictive control structure of the HDH. [Coordinator as the feedforward engine and cooperators as the feedback engine; pole-offers and pole-observations are exchanged and compared against the pole-objectives, with pole-obstacles, offsets and outages fed back.]
Figure 14. The plan-perform-perfect-loop of the HDH with orders-to-objects system dynamics. [Plan feeds forward to perform and perfect; perform feeds back through perfect to plan; orders enter and objects leave the system.]
Figure 15. Joint feasible regions of decomposed problem with different coordinator models: (a) global rigorous model; (b) global restricted model; (c) global reduced model; (d) global relaxed model.
Figure 16. Hierarchical decomposition heuristic. [Coordinator model with weight adjustment sending pole-offers to cooperators 1 to 3 and receiving pole-obstacles, offsets and outages.]
Figure 17. Physical connectivity of the global multi-site problem (coordinator). [Supply perimeter-units S11-S13 and S21, pool-units T11-T16 and T21-T26, pipeline-units P1, P12 and P22, blenders B21 and B22, and demand perimeter-units D21 and D22.]
Figure 18. Physical connectivity of local off-site sub-problem (cooperator 1). [S11-S13, pool-units T11-T16 and pipeline-units P1 and P12.]
Figure 19. Physical connectivity of local on-site sub-problem (cooperator 2). [S21, pipeline-units P1, P12 and P22, pool-units T21-T26, blenders B21 and B22, and demand perimeter-units D21 and D22.]
Figure 20. Decomposition strategies: (a) centralized (no decomposition); (b) collaborative; (c) coordinated – relaxed coordinator; (d) coordinated – partially rigorous coordinator. [Coordination layer versus cooperation layer: (a) rigorous cooperators 1 and 2 inside the coordinator, no explicit cooperators; (b) rigorous cooperator 2 inside the coordinator, rigorous cooperator 1 in the cooperation layer; (c) relaxed cooperators 1 and 2 inside the coordinator, rigorous cooperators 1 and 2 in the cooperation layer; (d) relaxed cooperator 1 and rigorous cooperator 2 inside the coordinator, rigorous cooperator 1 in the cooperation layer.]
Figure 21. Gantt chart showing mode and material-operations – solution from centralized strategy. [Operation modes of units T11-T16, T21-T26, P1, P12, P22, B21 and B22 over the 8-day horizon.]
Figure 22. Total pole-obstacles in the coordinated and collaborative strategies (conservative scenario). [Pole-obstacles versus iterations; objective values: collaborative $898.60, coordinated - relaxed $894.60, coordinated - partially rigorous $889.60.]
Figure 23. Total pole-obstacles in the coordinated and collaborative strategies (aggressive scenario). [Pole-obstacles versus iterations; objective values: collaborative $935.08, coordinated - relaxed $905.08, coordinated - partially rigorous $929.20.]
Table 1. Examples of organizational and managerial hierarchies.
Top Layer | Middle Layer | Bottom Layer
Planner | Master-Scheduler | Schedulers
Board of Directors | Chief-Executive | Executives
Chief-Executive | Executive | Directors
Executive | Director | Managers
Director | Manager | Workers (white-collar)
Manager | Supervisor / Lead | Workers (blue-collar), i.e., Operators, Laborers, Developers, Tradesmen, Journeymen, etc.
Supervisor / Lead | Workers (blue-collar) | Equipment (Unit), Services (Utilities), Tools (Utensils), Materials (Stocks) (UOSS)
Owner | General Contractor | Sub-Contractors
Administrator | Doctor | Nurses, Orderlies, Nursing Assistants, etc.
Sergeant | Corporal | Privates, Soldiers, Troops, etc.
Manager | Coach | Players, Team-Members
Principal | Teacher | Students
Table 2. Results – motivating example 1.
Iteration | Resource-Protocol | Pole-Outlines (Coordinator) | Pole-Offers | Pole-Obstacles (Cooperator 1) | Pole-Obstacles (Cooperator 2)
1 | R1 | 0 ≤ R10 ≤ 10 | 10 | -5 | -10
1 | R2 | 0 ≤ R20 ≤ 5 | 5 | 0 | 0
2 | R1 | -10 ≤ R10 ≤ 0 | 0 | 0 | 0
2 | R2 | 0 ≤ R20 ≤ 5 | 5 | 0 | 0
Table 3. Results – motivating example 2.
Iteration | Resource-Protocol | Pole-Outlines (Coordinator) | Pole-Offers | Pole-Obstacles (Cooperator 1) | Pole-Obstacles (Cooperator 2)
1 | R1 | 0 ≤ R10 ≤ 10 | 5 | 0 | -5
1 | R2 | 0 ≤ R20 ≤ 5 | 5 | 0 | 0
2 | R1 | -5 ≤ R10 ≤ 5 | 5 | 0 | -5
2 | R2 | 0 ≤ R20 ≤ 5 | 5 | 0 | 0
3 | R1 | -10 ≤ R10 ≤ 0 | 0 | 0 | 0
3 | R2 | 0 ≤ R20 ≤ 5 | 5 | 0 | 0
Table 4. Results – motivating example 3.
Iteration | Resource-Protocol | Pole-Outlines (Coordinator) | Pole-Offers | Pole-Obstacles (Cooperator 1) | Pole-Obstacles (Cooperator 2)
1 | R1 | 0 ≤ R10 ≤ 10 | 5 | 0 | 0
1 | R2 | 0 ≤ R20 ≤ 15 | 15 | -5 | 0
2 | R1 | 0 ≤ R10 ≤ 10 | 10 | 0 | 0
2 | R2 | -5 ≤ R20 ≤ 10 | 10 | -5 | 0
3 | R1 | 0 ≤ R10 ≤ 10 | 10 | 0 | 0
3 | R2 | -10 ≤ R20 ≤ 5 | 5 | 0 | 0
Table 5. Model parameters – cooperator 1.
Pools | Lot-Size LB, $XH^{min}_{pl}$ (kMT) | Lot-Size UB, $XH^{max}_{pl}$ (kMT) | SDWB, $XH^{SDWB}_{pl}$ (kMT) | FDD, $FDD^{min}_{pl}$ (h)
T11 | 0 | 9 | 0 | 0
T12 | 0 | 4.2 | 0 | 0
T13 | 2 | 12.5 | 4 | -
T14 | 2 | 12.7 | 4.05 | 0
T15 | 2 | 18.5 | 5.35 | 0
T16 | 2 | 18.5 | 6.6 | 0
Table 6. Model parameters – cooperator 2.
Pools | Lot-Size LB, $XH^{min}_{pl}$ (kMT) | Lot-Size UB, $XH^{max}_{pl}$ (kMT) | SDWB, $XH^{SDWB}_{pl}$ (kMT) | FDD, $FDD^{min}_{pl}$ (h)
T21 | 1 | 6.2 | - | -
T22 | 1 | 6.2 | - | -
T23 | 1 | 6.2 | 1.5 | -
T24 | 0 | 3.9 | 0 | -
T25 | 0 | 0.35 | - | -
T26 | 0 | 0.35 | - | -
Continuous Processes | Charge-Size LB (kMT/h) | Charge-Size UB (kMT/h)
B21 | 0.1 | 0.28
B22 | 0.1 | 0.3
P1 | 0.25 | 0.32
P22 | 0.25 | 0.35
Table 7. Model parameters – relaxed coordinator.
Pools | Lot-Size LB, $XH^{min}_{pl}$ (kMT) | Lot-Size UB, $XH^{max}_{pl}$ (kMT) | SDWB, $XH^{SDWB}_{pl}$ (kMT) | FDD, $FDD^{min}_{pl}$ (h)
T11 | 0 | 9 | - | -
T12 | 0 | 4.2 | - | -
T13 | 0 | 12.5 | - | -
T14 | 0 | 12.7 | - | -
T15 | 0 | 18.5 | - | -
T16 | 0 | 18.5 | - | -
T21 | 0 | 6.2 | - | -
T22 | 0 | 6.2 | - | -
T23 | 0 | 6.2 | - | -
T24 | 0 | 3.9 | - | -
T25 | 0 | 0.35 | - | -
T26 | 0 | 0.35 | - | -
Continuous Processes | Charge-Size LB (kMT/h) | Charge-Size UB (kMT/h)
B21 | 0.1 | 0.28
B22 | 0.1 | 0.3
P1 | 0.25 | 0.32
P22 | 0.25 | 0.35
Table 8. Model parameters – partially rigorous coordinator.
Pools | Lot-Size LB, $XH^{min}_{pl}$ (kMT) | Lot-Size UB, $XH^{max}_{pl}$ (kMT) | SDWB, $XH^{SDWB}_{pl}$ (kMT) | FDD, $FDD^{min}_{pl}$ (h)
T11 | 0 | 9 | - | -
T12 | 0 | 4.2 | - | -
T13 | 0 | 12.5 | - | -
T14 | 0 | 12.7 | - | -
T15 | 0 | 18.5 | - | -
T16 | 0 | 18.5 | - | -
T21 | 1 | 6.2 | - | -
T22 | 1 | 6.2 | - | -
T23 | 1 | 6.2 | 1.5 | -
T24 | 0 | 3.9 | 0 | -
T25 | 0 | 0.35 | - | -
T26 | 0 | 0.35 | - | -
Continuous Processes | Charge-Size LB (kMT/h) | Charge-Size UB (kMT/h)
B21 | 0.1 | 0.28
B22 | 0.1 | 0.3
P1 | 0.25 | 0.32
P22 | 0.25 | 0.35
Table 9. Model parameters – semi-continuous flow constraints on outlet-ports.
Source-Port | Mode-Operation | Flow LB (kMT/h) | Flow UB (kMT/h)
S11 A,B,C 0.04 0.35
S12 A,B,C 0.02 0.45
S13 A,B,C 0 2
S21 A,B,C 0 0.25
T11 A,B,C 0 1.5
T12 A,B,C 0 1.5
T13 A,B,C 0 1.5
T14 A,B,C 0 1.5
T15 A,B,C 0 1.5
T16 A,B,C 0 1.5
T21 A,B,C 0 0.28
T22 A,B,C 0 0.28
T23 A,B,C 0 0.28
T24 A,B,C 0 0.38
T25 ABC 0 0.28
T26 ABC 0 0.3
B21 ABC 0 0.28
B22 ABC 0 0.3
P1 A,B,C 0 0.25
P12 A,B,C 0 0.35
P22 A,B,C - -
Table 10. Model parameters – semi-continuous flow constraints on inlet-ports.
Destination-Port | Mode-Operation | Flow LB (kMT/h) | Flow UB (kMT/h)
D21 ABC 0.06 0.28
D22 ABC 0.078 0.3
T11 A,B,C 0 2
T12 A,B,C 0 2
T13 A,B,C 0 2
T14 A,B,C 0 2
T15 A,B,C 0 2
T16 A,B,C 0 2
T21 A,B,C 0 0.28
T22 A,B,C 0 0.28
T23 A,B,C 0 0.28
T24 A,B,C 0 0.15
T25 ABC 0 0.28
T26 ABC 0 0.3
B21 A,B,C 0 0.28
B22 A,B,C 0 0.29
P1 A,B,C 0 0.35
P12 A,B,C 0 0.35
P22 A,B,C - -
Table 11. Initial lot-sizes for pool-units.
Pool-Unit | Initial Lot-Size (kMT) | Initial Material
T11 0 B
T12 0 B
T13 10.579 A (cooperators)
T14 12.675 B
T15 18.051 B
T16 2.484 B
T21 5.251 A
T22 5.52 B
T23 5.608 B
T24 0 B
T25 0 ABC
T26 0 ABC
Table 12. Inverse-yield orders for blender process-units.
Blender-Unit | Inlet-Port | Start-Time (h) | End-Time (h) | Inverse-Yield LB | Inverse-Yield UB
B21 A 0 24 0 0
24 48 0.95 1
48 192 0 0
B 0 24 0.95 1
24 48 0 0
48 192 0.95 1
0 192 0 0
C 0 192 0 0
B22 A 0 24 0.01781 0.01968
24 48 0.10124 0.11189
48 72 0.2804 0.30992
72 120 0 0
120 144 0.8282 0.91538
144 168 0.40206 0.44439
168 192 0 0
B 0 24 0.93219 1
24 48 0.84877 0.93811
48 72 0.6696 0.74008
72 120 0.95 1
120 144 0.1218 0.13462
144 168 0.54794 0.60561
168 192 0.95 1
C 0 192 0 0
Table 13. Supply profile for the scheduling horizon.
Supply Perimeter-Unit (Mode) | Start-Time (h) | End-Time (h) | Rate LB (kMT/h) | Rate UB (kMT/h)
S11 (A) 48 164 0.1219 0.1219
S12 (B) 128 152 0.375 0.375
152 168 0.3125 0.3125
168 192 0.375 0.375
S13 (B) 78 96 1.47222 1.47222
126 138 0.41667 0.41667
S21 (B) 25 72 0.20426 0.20426
121 148 0.22833 0.22833
170 180 0.2055 0.2055
Table 14. Demand profile for the scheduling horizon – aggressive approach.
Demand Perimeter-Unit (Mode) | Start-Time (h) | End-Time (h) | Rate LB (kMT/h) | Rate UB (kMT/h)
D21 0 48 0.18353 0.18353
(ABC) 48 72 0.18345 0.18345
72 96 0.18353 0.18353
96 120 0.18087 0.18087
120 144 0.18353 0.18353
144 168 0.20156 0.20156
168 192 0.18353 0.18353
D22 0 24 0.28546 0.28546
(ABC) 24 48 0.28061 0.28061
48 72 0.28372 0.28372
72 96 0.27328 0.27328
96 120 0.27465 0.27465
120 144 0.27525 0.27525
144 168 0.27639 0.27639
168 192 0.26302 0.26302
Table 15. Demand profile for the scheduling horizon – conservative approach.
Demand Perimeter-Unit (Mode) | Start-Time (h) | End-Time (h) | Rate LB (kMT/h) | Rate UB (kMT/h)
D21 0 48 0 0.18353
(ABC) 48 72 0 0.18345
72 96 0 0.18353
96 120 0 0.18087
120 144 0 0.18353
144 168 0 0.20156
168 192 0 0.18353
D22 0 24 0 0.28546
(ABC) 24 48 0 0.28061
48 72 0 0.28372
72 96 0 0.27328
96 120 0 0.27465
120 144 0 0.27525
144 168 0 0.27639
168 192 0 0.26302
Table 16. Problem statistics (presolved values in brackets).
Model Constraints Continuous Variables Non-Zero Coefficients Binary Variables
Global rigorous coordinator 7234 (3010) 4450 (2187) 29780 (9484) 1170 (951)
Global partially rigorous coordinator 6809 (3134) 4450 (2406) 28056 (9996) 1170 (1045)
Global relaxed coordinator 6786 (2771) 4423 (2194) 27731 (8659) 1154 (923)
Local rigorous on-sites 3133 (1287) 2228 (1013) 12971 (4383) 607 (394)
Local rigorous off-sites 3598 (280) 1991 (266) 13314 (939) 488 (56)
Table 17. Results for the aggressive case.
Approach | Iterations | Time (CPU s) | Objective Function ($)
Centralized | 1 | 6301.0 | 935.08
Collaborative | 11 | 626.2 | 935.08
Coordinated - Relaxed | 9 | 401.7 | 905.08
Coordinated - Partially Rigorous | 4 | 195.7 | 929.2
Table 18. Results for the conservative case.
Approach | Iterations | Time (CPU s) | Objective Function ($)
Centralized | 1 | 1488.0 | 935.08
Collaborative | 7 | 386.2 | 898.6
Coordinated - Relaxed | 7 | 333.6 | 894.6
Coordinated - Partially Rigorous | 2 | 78.5 | 889.6
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 

and to manage feedback between these elements. This paper presents a novel method for integrating and interfacing these different decision-making layers with the goal of achieving global feasibility.

In the context of this paper, models may be referred to as global when they consider the full scope of the decision-making system in terms of both the temporal and spatial dimensions of the system, such as a fully integrated oil-refinery planning its production or manufacturing months into the future. On the other hand, local models only consider a sub-section of the decision-making system such as the gasoline blending area or the first few time-periods of a multiple time-period planning model. In terms of the level of detail, we classify the global models of the coordinator into decreasing order of detail as:

• Rigorous: models that contain all of the known production constraints, variables and bounds that exist in the local rigorous models of a system;
• Restricted: models that are partially rigorous, i.e., they include all of the detailed constraints, variables and bounds of sections of the entire system;
• Relaxed: models that are not necessarily rigorous but contain the entire feasible region of all cooperators, with certain constraints or bounds that have either been relaxed or removed;
• Reduced: models that are not rigorous and only contain a section of the feasible region of the cooperators but usually contain, in spirit, the entire scope of the global problem.

A more detailed discussion on the effects of the different model types in the framework of the hierarchical decomposition heuristic (HDH) is presented in Section 3. From a scheduling perspective, master scheduling models are often relaxed, reduced or restricted global models in that they do not contain all of the processing, operating and maintaining details of the production[2] system. In the same context, individual scheduling models are usually local rigorous models since they contain all of the necessary production constraints in order to accurately represent the sub-system. Scheduling models cannot usually be global rigorous models due to the unreasonable computation times that this would entail.

[2] Production corresponds to the integration of process, operations and maintenance.
The concepts of global versus local and reduced/relaxed/restricted versus rigorous models are of course relative. That is, a local rigorous model from a scheduling perspective may be considered as global relaxed from a process control perspective.

Another important concept is that of "globally feasible" solutions to a distributed decision-making problem. In the context of this paper this term refers to a decision from the coordination layer which is feasible for all cooperators in the cooperation layer. Globally feasible solutions indicate that a consensus has been reached between the coordination and cooperation layers as well as amongst cooperators.

An important aspect of making good decisions at the coordination level is the correct estimation or approximation of the system's capability[3], i.e., roughly how much the system can produce and how fast. If the capability is under-estimated (i.e., sand-bagged) there is opportunity loss since the system could in fact be more productive than expected. On the other hand, if capability is over-estimated (i.e., cherry-picked), the expectations will be too high and infeasibilities will likely be encountered at the cooperation layer of the system.

[3] Capability is defined as the combination of connectivity, capacity and compatibility information.

Concomitantly, in every decomposed system there also needs to be knowledge of what constraints should be included in each of the coordination and cooperation layers and not just what the upper and lower ranges of the constraints should be. Therefore, we introduce the notion of public, private, protected and plot/ploy constraints, which are defined as follows:

• Public: constraints that are fully known to every element in the system (coordinator and all cooperators). If all constraints in a system are public this indicates that the coordinator may be a global rigorous model while the cooperators are all local rigorous models. In this scenario only one iteration between coordinator and cooperators is needed in the system in order to achieve a globally feasible solution since the coordinator's decisions will never violate any of the cooperators' constraints;
• Private: constraints that are only known to the individual elements of the system (coordinator and/or cooperators). Private constraints are very common in decomposed systems where the coordinator does not know about all of the detailed or pedantic requirements of each
cooperator and vice-versa the coordinator can have private constraints not known to the cooperators;
• Protected: constraints that are known to the coordinator and only to a few of the cooperators. This situation occurs when the coordinator has different levels of knowledge about each cooperator in the system;
• Plot/Ploy: situations in which one or more cooperators join forces (i.e., collusion) to fool or misrepresent their results back to the coordinator for self-interest reasons.

The decomposition strategies considered in this paper address systems with any combination of public, private and protected constraints. In other words, the cooperators are considered to be authentic, able and available elements of the system in that (1) they will not deceive the coordinator for their own benefit, (2) they are capable of finding a feasible solution to the coordinator's requests if one exists and (3) they will address the coordinator's request as soon as it is made.

Decomposition however comes at a price. Even though each local rigorous model in a decomposed or divisionalized system is smaller and thus simpler than the global rigorous one, iterations between the coordination and cooperation layers will likely be required in order to achieve a globally feasible solution, which may increase overall solution times. In addition, unless a global optimization technique such as Lagrangean decomposition or spatial branch-and-bound search is applied there are no guarantees that the globally optimal balance or equilibrium of the system will be found. Nevertheless, the following restrictions on the decision-making system constitute very compelling reasons for the application of decomposition strategies:

• Secrecy/security: In any given decision-making system there may be private constraints (i.e., information that cannot be shared across the cooperators). This may be the case when different companies are involved such as in a supply chain with outsourcing. Additionally the cooperators may make their decisions using different operating systems or software so that integration of their models is not straightforward. It is also virtually impossible to centralize a system in which the cooperators use undocumented personal knowledge to make their
decisions (as is often the case when process simulators or spreadsheets are used to generate schedules) unless significant investment is made in creating a detailed mathematical model.
• Support: The effort to support and maintain a centralized or monolithic decision-making system may be too large and/or unwieldy.
• Storage: Some decision-making models contain so many variables and constraints that it is impossible to store these models in computer memory.
• Speed: There are very large and complex decision-making models that cannot be solved in reasonable time even though they can be stored in memory. In this situation decomposition may be an option to reduce the computational time for obtaining good feasible solutions.

The performance of any decomposition method is highly dependent on the nature of the decision-making system in question and on how the decomposition is configured. Defining the decomposition strategy can be a challenge in itself and of course is highly subjective. For instance, one of the first decisions to be made when decomposing a system is the dimension of the decomposition: should the system be decomposed in the time domain (Kelly, 2002), in the spatial equipment domain (Kelly and Mann, 2004), in the spatial material domain (Kelly, 2004b), or in some combination of the three dimensions? If decomposing in the time domain, should there be two sub-problems with half of the schedule's time horizon in each one, or should there be five sub-problems with one fifth of the schedule's time horizon in each one? Should there be any time overlap between the sub-problems? The answers to these questions are problem-specific and therefore the application of decomposition strategies requires a deep understanding of the underlying decision-making system.

1.1. Centralized, Coordinated, Collaborative and Competitive Reasoning

There is no question that the most effective decision-making tool for any given system is a fully centralized strategy that uses a global rigorous model provided it satisfies the secrecy, support, storage and speed restrictions mentioned previously. If that is not at all possible then decomposition is required. If decomposition is needed the best decomposition strategy is a coordinated one (Figure 1) where the
cooperators can work in parallel. If a coordinator does not exist, the next best approach is a collaborative strategy in which the cooperators work together obeying a certain priority or sequence in order to achieve conformity, consistency or consensus. The worst-case decision-making scenario is a competitive strategy in which the cooperators compete or fight against each other in order to obtain better individual performance as opposed to good performance of the overall system (self versus mutual-interest). This type of scenario is somewhat approximated by a collaborative framework in which the cooperators work in parallel or simultaneously as opposed to in series or in priority as suggested above. In this paper coordinated and collaborative decision-making strategies are discussed and are demonstrated in the illustrative example.

Figure 2 provides a hypothetical value statement for the four strategies. If we use a "defects versus productivity" trade-off curve, then for the same productivity (see vertical dotted line) there are varying levels of defects, not only along the line, but also across several lines representing the centralized, coordinated, collaborative and competitive strategies. These lines or curves represent an operating, production or manufacturing relationship for a particular unit, plant or enterprise. Each line can also represent a different reasoning isotherm in the sense of how defects versus productivity changes with varying degrees of reasoning, where the centralized reasoning isotherm has the lowest amount of defects for the same productivity as expected.

Collaborative reasoning implies that there is no coordination of the cooperators in a system. As each cooperator finds a solution to its own decision-making problem it strives to reach a consensus with the adjacent cooperators based on its current solution. The solution of the global decision-making problem then depends on all cooperators across the system reaching an agreement or equilibrium in a prioritized fashion where each cooperator only has limited knowledge of the cooperator(s) directly adjacent to itself. It is thus clear that collaborative reasoning reaches at best a myopic conformance between connected cooperators. As previously stated, in cases where no priorities are established a priori for the cooperators the collaborative strategy can easily become a competitive one since the cooperator which is the fastest at generating a feasible schedule for itself gains immediate priority over the other cooperators.
Therefore the cooperators will compete in speed in order to simplify their decisions given that the cooperator that has top priority is the one that is the least constrained by the remainder of the decision-making system.

Coordinated reasoning on the other hand contains a coordination layer with a model of the global decision-making system, albeit often a simplified one by the use of a relaxed, reduced or restricted model. As a result the conformance between the cooperators is reached based on a global view of the system. This may entail a reduced number of iterations between the hierarchical layers for some problems when compared to collaborative strategies, notably when the flow paths of the interconnected resources of the cooperators are in a convergent, divergent and/or cyclic configuration as opposed to a simple linear chain (i.e., a flow-shop or multi-product network).

Centralized systems can be viewed as a subset of coordinated systems since any coordinated strategy will be equivalent to a centralized one when the coordination level in Figure 1 contains a global rigorous model. Additionally, coordinated strategies are also a superset of purely collaborative systems since the latter consist of a set of interconnected cooperators with no coordination. Collaboration can be enforced in a coordinated structure by assigning the same model as one of the cooperators to the coordinator.

Centralized systems do not suffer from the arbitrary decomposition boundaries or interfaces. This implies that in a monolithic or centralized decision-making problem it is the optimization solver or search engine that manages all aspects of the cooperator sub-system interactions. Examples of these interactions are the time-delays between the supply and demand of a resource between two cooperators and the linear and potentially non-linear relationships between two or more different resources involving multiple cooperators. In the centrally managed problem these details are known explicitly. However in the coordinated or collaboratively managed problems these are only implicitly known and must be handled through private/protected information only.

1.2. General Structure of Decomposed Problems

As previously shown in Figure 1, decomposed or distributed problems consist of a coordination layer and a divisionalized cooperation layer. By analyzing previous work on decomposition of large-scale
optimization problems, it is possible to identify two elements within the coordination layer: price and pole coordination. Figure 3 shows the general structure of a decomposed system. In the coordination layer there may be price and/or pole coordination, where these two elements within the coordination layer may also exchange information amongst themselves. The coordinator or super-project sends down poles and/or prices to all cooperating sub-projects. Once that information is evaluated by the cooperators, feedback information is sent back to the coordinator. It should be noted that the cooperators do not communicate with each other, i.e., there is no collusion/consorting between them. This closed-loop procedure continues until equilibrium or a stable balance point is reached between the two decision-making layers. Most of the decomposition approaches in the literature can be represented using the structure in Figure 3 as will be demonstrated in Section 2.

A pole refers to information that is exchanged in a decomposed problem regarding the quantity, quality and/or logic of an element or detail of the system. The use of the word pole is taken from Jose (1999) and has been extended to also include the logic elements of a system. The interchange of pole[4] information between the decision-making layers may be denoted with what we call a protocol or parley (Figure 4). These protocols enable communication between the layers and manage the cyclic, closed-loop nature of decomposition information exchange. They represent and are classified into three distinct elements as follows:

[4] Another term for pole, which is perhaps more descriptive, is "peg" or pole-equilibrium-guess given that the coordinator must essentially guess what pole values will be accepted by all of the interacting cooperators.

• Resource protocols relate to extensive and intensive variables that may be exchanged between sub-problems or cooperators such as flow-poles (e.g., number of barrels of oil per day in a stream), component-poles (e.g., light straight-run naphtha fraction in a stream) and property-poles (e.g., density or sulfur of a stream).
• Regulation protocols refer to extensive and intensive variables that are not usually transported or moved across cooperators such as resources but reflect more a state or condition such as
holdup-poles (e.g., number of barrels of oil in a particular storage tank) and condition-poles (e.g., degrees Celsius in a FCC unit).
• Register protocols represent the extensive and intensive logic variables in a system and may involve mode-poles (e.g., mode of operation of a process unit), material-poles (e.g., material-service of a process unit) and move-poles (e.g., streams that are available for flowing material from one process unit to the next). The former register protocols are intensive logic variables whereas an example of an extensive logic variable would be the duration or amount of time a mode of operation is active for a particular unit.

Each protocol consists of a pole-offer to the cooperators that originates from the coordinator. The cooperators return pole-obstacles (resource), offsets (regulation) or outages (register) to the coordinator that indicate if the pole-offer has been accepted or not. If the cooperators accept the (feasible, consistent) pole-offer then the pole-obstacles, -offsets or -outages are all equal to zero.

The protocols may be similarly applied to prices since these are related to adjustments of the quantity, quality and/or logic elements of the system. From optimization theory it is known that the prices correspond to the dual values of the poles. The adjustment of prices can be used to establish a quantity balance between supply and demand (also known as the equilibrium price in economic theory) based on the simple economic principle that to increase the supply of a resource its price should be increased and to increase the demand of a resource its price should be reduced (Cheng et al. 2006).
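To make the protocol vocabulary concrete, the short Python sketch below encodes poles and protocols as simple data structures and shows how a pole-offer and its returned deviation would be recorded. The class and field names (ProtocolKind, Pole, Protocol, offer, shortage, surplus) are our own illustrative assumptions and not nomenclature from this paper.

    from dataclasses import dataclass, field
    from enum import Enum


    class ProtocolKind(Enum):
        RESOURCE = "resource"      # flow-, component- and property-poles
        REGULATION = "regulation"  # holdup- and condition-poles
        REGISTER = "register"      # mode-, material- and move-poles


    @dataclass
    class Pole:
        """One pole of a protocol for a single time-period."""
        name: str              # e.g. "crude_flow[t=3]"
        kind: ProtocolKind
        offer: float = 0.0     # pole-offer sent down by the coordinator
        shortage: float = 0.0  # pole-obstacle/offset/outage returned by a cooperator
        surplus: float = 0.0   # the complementary (plus) deviation

        def accepted(self, tol: float = 1e-6) -> bool:
            """A pole-offer is accepted when the returned deviations are zero."""
            return self.shortage <= tol and self.surplus <= tol


    @dataclass
    class Protocol:
        """A resource, regulation or register protocol: one pole per time-period."""
        kind: ProtocolKind
        poles: list = field(default_factory=list)

        def globally_feasible(self) -> bool:
            return all(p.accepted() for p in self.poles)


    # Minimal usage: a resource-protocol over a three-period horizon.
    flow = Protocol(ProtocolKind.RESOURCE,
                    [Pole(f"crude_flow[t={t}]", ProtocolKind.RESOURCE) for t in range(3)])
    flow.poles[0].offer = 120.0      # coordinator's pole-offer for period 0
    flow.poles[0].shortage = 20.0    # cooperator reports it can only accept 100.0
    print(flow.globally_feasible())  # False -> another iteration is required

In an industrial implementation each pole would of course be tied to the corresponding variables of the coordinator and cooperator MILPs rather than stored as plain numbers.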
1.3. Understanding and Managing Interactions between Cooperators

Obviously if each cooperator had no common, connected, centralized, shared, linked or global elements then they would be completely separate and could be solved independently. Unfortunately, when there are common elements between two or more cooperators that either directly or indirectly interact, completely separated solution approaches will only yield globally feasible solutions by chance. A centralized decision-making strategy should have accurate and intimate knowledge of resource, regulation and register-poles and how each is related to each other across multiple cooperators, even how a resource-pole links to or affects in some deterministic and stochastic way a register-pole. As we abstract a centralized system into a coordinated and cooperative system, details of synergistic (positively correlated) and antagonistic (negatively correlated[5]) interactions are also abstracted or abbreviated. As such, we lose information which is proxied or substituted by cooperator feedback along essentially three axes: time, linear space and nonlinear space.

[5] As a point of interest note that the base of the word correlate is relate, implying a relationship.

More specifically, the dead-time, delay or lag of how a change in a pole affects itself over time must be properly understood given that it is known from process control theory that dead-time estimation is a crucial component in robust and performant controller design. Linear spatial interactions are how one pole's rate of change affects another pole's rate of change, which is also known as the steady-state gain matrix defined at very low frequencies. Well-known interaction analysis techniques such as the relative gain array (RGA) can be used to better interpret the effect and pairing of one controlled variable with another. In addition, multivariate statistical techniques such as principal component analysis (PCA) can also be used to regress temporal and spatial dominant correlations given a set of actual plant data and to cluster poles that seem to have some level of interdependency. Nonlinear spatial interactions define how, at different operating, production or manufacturing-points, nonlinear effects exist which can completely alter the linear relationship from a different operating-point. Thus, for strongly nonlinear pole interactions, some level of nonlinear relationship should be included in the coordinator's restricted, relaxed or reduced model as a best practice. One such guideline for helping with this endeavor is found in Forbes and Marlin (1994).
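As a brief aside on the interaction analysis mentioned above, the snippet below computes the relative gain array for a hypothetical 2x2 steady-state gain matrix; the numerical values are invented purely for illustration.

    import numpy as np

    # Hypothetical steady-state gain matrix G: rows are controlled poles,
    # columns are manipulated poles (values are illustrative only).
    G = np.array([[2.0, 0.5],
                  [1.0, 1.5]])

    # Relative gain array: element-wise product of G and the transpose of its inverse.
    RGA = G * np.linalg.inv(G).T

    print(np.round(RGA, 3))
    # Diagonal elements close to 1 suggest weakly interacting pole pairings;
    # values far from 1 (or negative) flag strong antagonistic interactions that
    # the coordinator's reduced/relaxed model should probably capture.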
From an organizational and managerial perspective, understanding and managing interactions amongst multiple diverse people across many departments and locations is a primary mandate of any management structure. Too many antagonistic and too few synergistic interactions can lead to chaos, low productivity and inefficiency, to name a few outcomes. Therefore a sound approach is to first understand the interactions and then to manage them, of which restructuring, removing and realigning organizational boundaries or barriers should be at the top of the list, i.e., how cooperators are divisionalized or separated across the resource, regulation and register-protocols to achieve maximum performance.

This paper is structured as follows: first, the existing decomposition methods available in the literature are transformed into the coordinated decomposition strategy shown in Figure 3. Second, the hierarchical decomposition heuristic (HDH) is presented and its contributions to other decomposition strategies are highlighted through an industrial illustrative example and three motivating examples.

2. Previous Decomposition Approaches

The need to solve increasingly larger scheduling problems has led to significant research output in the area of decomposition. There are broadly three types of decomposition approaches classified with respect to their drivers: pole- and/or price-directed, weight-directed and error-directed. Most of the decomposition literature contains pole- and/or price-directed approaches while the process control literature contains almost exclusively error-directed approaches. The remainder of this section provides more details and instances of each decomposition approach.

In the same manner that optimization algorithms can be based on primal and/or dual information, some decomposition strategies may be classified as pole- and/or price-directed. The classical examples are the Generalized Benders decomposition (Geoffrion, 1972) and Dantzig-Wolfe decomposition (Dantzig and Wolfe, 1960) respectively. An increasingly popular price/pole method for solving large-scale decision-making problems is Lagrangean decomposition (Wu and Ierapetritou, 2003; Karuppiah and Grossmann, 2006). This method finds the global optimal solution of the decomposed system within a certain tolerance and can be represented in the general decomposition structure as shown in Figure 5 (for a maximization problem).

In this method the cooperators represent temporally and/or spatially decomposed sub-problems. The so-called linking constraints between the cooperator sub-problems are dualized in the objective function of the coordinator with the use of Lagrange multipliers (λ) which are updated at each iteration by using what is known as a sub-gradient optimization method. Even though this constitutes one of the most
efficient global optimization strategies for decomposed problems to date, there are a few drawbacks that limit its implementation, particularly with respect to solution times. Information regarding the upper bound (UB) of the global rigorous (maximization) decision-making problem is needed in order to perform the update of the Lagrange multipliers. Computational times may increase significantly if the cooperators are themselves individually difficult to solve. Additionally, there must be a solution at each iteration which is globally feasible over all cooperators in order to calculate the lower bound, which for practical problems is not an insignificant task. It should therefore be clear that Lagrangean decomposition benefits from improved strategies for obtaining this lower bound, i.e., strategies which will yield a globally feasible solution such as the one presented in this work. In Karuppiah and Grossmann (2006) solutions which were not globally feasible for all cooperators were eliminated from the solution set and the search continued for other solutions. Our paper presents a different strategy for obtaining globally feasible solutions that uses pole-obstacle, offset and outage information from all cooperators in order to resolve conflict and to achieve consensus in subsequent iterations.

Note that the coordination layer in Lagrangean decomposition contains both a pole-coordinator which calculates the new upper bound (of the maximization problem) and a price-coordinator which calculates the new Lagrange multipliers (also known as marginal-costs or shadow-prices) for each cooperator (Figure 5).
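For readers unfamiliar with the mechanics summarized above, the following sketch applies Lagrangean decomposition to a deliberately tiny maximization problem: maximize 3x - 2y subject to x = y, with 0 <= x <= 4 in one cooperator and 0 <= y <= 6 in the other. The linking constraint x = y is dualized and the multiplier is updated with a simple diminishing-step sub-gradient rule. The problem data and the step-size rule are our own assumptions, chosen only to make the update mechanism visible; they are not taken from the references cited above.

    # Tiny Lagrangean decomposition sketch: maximize 3*x - 2*y  s.t.  x = y,
    # 0 <= x <= 4 (cooperator 1), 0 <= y <= 6 (cooperator 2).
    # Dualizing x = y with multiplier lam yields two separable sub-problems.

    def sub1(lam):
        # max (3 + lam) * x over 0 <= x <= 4 (solved by inspection of the sign)
        x = 4.0 if (3.0 + lam) > 0 else 0.0
        return x, (3.0 + lam) * x

    def sub2(lam):
        # max (-2 - lam) * y over 0 <= y <= 6
        y = 6.0 if (-2.0 - lam) > 0 else 0.0
        return y, (-2.0 - lam) * y

    lam = 0.0
    best_upper = float("inf")
    for k in range(1, 51):
        x, v1 = sub1(lam)
        y, v2 = sub2(lam)
        upper = v1 + v2                  # dual (upper) bound for the maximization
        best_upper = min(best_upper, upper)
        g = x - y                        # sub-gradient of the dual function
        if g == 0.0:
            break
        lam -= (1.0 / k) * g             # diminishing-step sub-gradient update

    print(f"multiplier ~ {lam:.2f}, best upper bound ~ {best_upper:.2f}")
    # The primal optimum is x = y = 4 with objective 4; the dual bound approaches
    # 4 as the multiplier approaches -2.

Note that the lower (globally feasible) bound still has to be produced separately at each iteration, which is precisely where the HDH proposed in this paper can be of use.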
Decomposed systems may also be managed through price/pole-directed strategies such as auctions, which are also used in multi-agent systems[6]. According to economic theory, if the demand for a certain resource (pole) by a consumer (i.e., a downstream cooperator) is greater than expected, the price of that resource must be increased in order to reduce its demand. On the other hand, a larger supply (i.e., from an upstream cooperator) than expected for a given resource entails a reduction in its price in order to reach price equilibrium. This is the basis of the auction-based decomposition method found in Jose (1999). The schematic for what they call a slack resource auction as a coordinated strategy can be seen in Figure 6.

[6] Multi-agent systems require either an auctioneer or an administrator which acts as the coordinator, with an individual agent, usually autonomous, representing a cooperator.

Jose and Ungar (2000) showed that an auction can find a set of prices corresponding to the global optimum of the overall problem if it has separable and convex sub-problems with a single time-period. In spite of defining the directionality of the price adjustment as a function of the pole-obstacles, they did not explicitly state how to calculate the step-size of the pole and price increase or decrease mechanism. Furthermore, since there is the exchange of prices, pole-offers and pole-obstacles (offsets, outages), this strategy actually corresponds to a combined price/pole error-directed strategy (Jose and Ungar, 1998).

A similar strategy for price adjustment can be found in Cheng et al. (2006) where this price adjustment is based on price-elasticity, i.e., the sensitivity of the prices with respect to the poles (Figure 7). In this figure, pole-observations correspond to the level of the pole-offer that each cooperator can achieve. The method presented in Cheng et al. (2006) however suffers from one of the known drawbacks of sensitivity analysis, which is the assumption of a fixed active set or basis to keep the elasticity information valid; this can be excessively restrictive for some decision-making systems such as scheduling problems. Cheng et al. (2006) studied process control problems which are assumed to be linear and convex whereas production scheduling problems are inherently non-linear and non-convex.

The second class of decomposition methods is weight-directed strategies. An example is the iterative aggregation and disaggregation strategy found in Jörnsten and Leisten (1995) where the constraints and variables of the cooperators are aggregated in the coordinator. The coordinator then optimizes with this new aggregated model and finds the next set of pole-offers that are sent to the cooperators (Figure 8). It is also the responsibility of the coordinator to re-calculate the variable and/or constraint weights based on the solutions from the cooperators. It should be noted that the constraint weights can be alternatively applied to the pole-obstacles (offsets, outages) through the use of artificial or slack variables. Another example of a somewhat related weight-directed decomposition strategy applied to large scheduling problems can be found in Wilkinson (1996) who performed temporal aggregation on the constraints and variables.

The third decomposition approach is based on error-directed strategies. In the context of our paper, error-directed strategies imply the feedback of model parameters or biases (e.g., pole-obstacles,
offsets, outages and pole sensitivity information). One of the first error-directed approaches for decomposing batch scheduling problems was proposed by Bassett et al. (1996). In their work they presented a method for solving large-scale batch scheduling problems by decomposing the system into a planning model (coordinator) and several scheduling sub-models (cooperators). The dimension of the decomposition was mainly temporal and partially spatial, i.e., the scheduling model in each cooperator considered a shorter time-horizon than the coordinator. The coordinator consisted of a globally relaxed model whereas each cooperator corresponded to a locally rigorous model. The strategy of Bassett et al. (1996) can be described in the general coordinated decomposition structure as shown in Figure 9.

As can be seen in Figure 9, there is no price coordination in this strategy. Equilibrium is reached solely by the interchange of pole-related information. Integer and capacity cuts are sequentially added to the coordinator's model whenever nonzero pole-obstacles[7] are generated by the cooperators. However, this assumes that the cooperators' constraints are protected, i.e., the cooperators must be willing to share or expose these more detailed constraints with the coordinator when required.

[7] Bassett et al. (1996) refer to pole-obstacles as "excess" variables.

Instead of only indicating to the coordinator that infeasibilities (obstacles, offsets, outages) have been encountered by the cooperators, information from the cooperators can sometimes be used directly in the coordinator model as parameters. This is the basis of another error-directed approach suggested by Zhang and Zhu (2006) where intensive quantity (i.e., process yield) information generated by the cooperators is used in the coordinator model. Interestingly, process yields also correspond to pole sensitivity information since yields alter the rate at which the flow of a stream changes with respect to the throughput of the unit-operation. In this approach yield constraints may be completely private, i.e., only known to the cooperators and not to the coordinator, given that the details of how these yields are calculated are the responsibility of the cooperators and may be quite non-linear. It should also be noted that, similarly to the decomposition approach of Cheng et al. (2006), sensitivity information is used as
feedback in the communication between the decision-making layers but referring to different elements: poles in Zhang and Zhu (2006) and prices in Cheng et al. (2006). Similar to the previous approach of Bassett et al. (1996), the strategy presented by Zhang and Zhu (2006) does not include any price coordination (Figure 10). Yet, the Zhang and Zhu (2006) decomposition approach is only applied to a multi-site process optimization problem with a single time-period which significantly decreases the complexity compared to multi-time-period scheduling problems. Its effectiveness for multi-time-period and multi-resource scheduling problems remains unverified, especially since multiple time-periods introduce degeneracy and can increase convergence times as shall be seen later.

3. Hierarchical Decomposition Heuristic (HDH) Algorithm

The hierarchical decomposition heuristic (HDH) achieves a coordinated or hierarchical equilibrium between decision-making layers. The coordinator is responsible for enabling conflict resolution, consensus reconciliation, continuous refinement and challenge research, which will be explained in more detail. An overview of the algorithm for the proposed error-directed HDH is seen in Figure 11. The individual steps of the algorithm presented below will provide the details of the coordinated reasoning procedure. It is assumed throughout the algorithm description that a scheduling problem is being decomposed and it is solved with mixed-integer linear programming (MILP).
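Before stepping through the individual steps, the skeleton below sketches how Steps 1 to 7 fit together as a closed loop. The function and argument names (solve_coordinator, solve_cooperators, update_pole_outlines and so on) are placeholders introduced here only for illustration; in a real implementation each would wrap the corresponding relaxed/reduced/restricted or local rigorous MILP.

    # Hedged skeleton of the HDH closed loop (Steps 1 to 5).  The callables are
    # placeholders: in practice each wraps a relaxed/reduced/restricted MILP
    # (coordinator) or a local rigorous MILP (cooperator).

    def hdh(solve_coordinator, solve_cooperators, update_pole_outlines,
            pole_outlines, max_iterations=50, tol=1e-6):
        for k in range(1, max_iterations + 1):
            # Step 1: the coordinator solves within the current pole-outlines.
            offers = solve_coordinator(pole_outlines)

            # Steps 2-3: dispatch the pole-offers; cooperators solve (in parallel
            # if resources allow) and return their obstacles/offsets/outages.
            feedback = solve_cooperators(offers)   # {pole: [(shortage, surplus), ...]}

            # Globally feasible (hierarchical equilibrium) when all deviations vanish.
            if all(s <= tol and p <= tol
                   for devs in feedback.values() for (s, p) in devs):
                return offers, k

            # Steps 4-5: conflict resolution re-capacitates the pole-outlines and
            # consensus reconciliation re-weights the coordinator objective.
            pole_outlines = update_pole_outlines(pole_outlines, feedback)

        raise RuntimeError("HDH did not reach a hierarchical equilibrium")

Steps 6 and 7 (continuous refinement and challenge research) would wrap this loop, re-starting it with incumbent-elimination cuts or adding pole-objective terms to the coordinator objective.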
3.1. Step 1. Solve coordinator problem

Solve the relaxed/reduced/restricted problem in the coordination layer to provably-optimal if possible or until a specified amount of computational time has elapsed. The primary assumption is that there are common, shared or linking poles between the coordination and the cooperation layers managed through the protocols. The initial lower and upper bounds on the quantity, quality and/or logic variables for the pole-offers (i.e., for the resource, regulation and register-protocols) are imposed through constraint (1) below; note that for each protocol there will be as many poles as the number of time-periods in the decision-making or scheduling horizon.

$P^{LB}_{i,k-1} \leq P_{i,k} \leq P^{UB}_{i,k-1}, \quad i = 1, \dots, NP$   (1)

The bounds or pole-outlines in constraint (1) are specified for the number of poles or, equivalently, the number of protocols. The k subscript refers to the current iteration where k-1 refers to the previous iteration. These lower and upper pole-outlines are only managed by the coordinator and change at each iteration if there are non-zero pole-obstacles, offsets and/or outages.

3.2. Step 2. Dispatch the solution of the coordinator problem

From the solution of the coordinator problem, quantity, quality and/or logic pole-offers $P_{i,k}$ are obtained and sent to all appropriate cooperators. Any variables in the coordinator and cooperator problems that are not involved in the protocols are only known to themselves and are not externalized (i.e., they are essentially private variables).

3.3. Step 3. Solve all cooperator sub-problems in parallel

Solve all local rigorous models using the pole-offers from the coordinator. In every cooperator two pole-obstacles, offsets and outages are attached to each pole-offer, which are called the pole-obstacle, offset and outage shortage and surplus ($P^{-}_{i,sp,k}$ and $P^{+}_{i,sp,k}$ respectively). In the following equations the expression $i \in sp$ denotes that pole i belongs to a particular cooperator sp, of which for resource-protocols the pole-offer is sent to two cooperators (i.e., the upstream or supplier cooperator and the downstream or demander cooperator).

$P_{i,sp,k} + P^{-}_{i,sp,k} - P^{+}_{i,sp,k} = P_{i,k}, \quad i = 1, \dots, NP, \; sp = 1, \dots, NSP, \; i \in sp$   (2)

where $P_{i,sp,k}$ is the cooperator's local value of the pole (i.e., its pole-observation). The pole-obstacle, offset and outage shortage and surplus variables, i.e., $P^{-}_{i,sp,k}$ and $P^{+}_{i,sp,k}$, must be added to the objective function of the individual cooperators. These variables are minimized using a suitably large weight, usually an order of magnitude larger than any other term in the objective function. Note that the shortage and surplus variables are mutually exclusive or complements to one another.
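A minimal stand-in for Step 3 is sketched below: instead of solving a full local rigorous MILP, each cooperator is reduced to a pair of private bounds, so that the shortage and surplus of equation (2) fall out of clipping the pole-offer to those bounds. This simplification is ours and is made purely to expose the bookkeeping; a real cooperator would return these deviations as the penalized slack variables of its own optimization.

    def cooperator_response(offer, private_lb, private_ub):
        """Stand-in for one cooperator solve: clip the pole-offer to the private
        bounds and report the shortage/surplus deviations of equation (2),
        i.e. observation + shortage - surplus = offer."""
        observation = min(max(offer, private_lb), private_ub)
        shortage = max(0.0, offer - private_ub)   # offer exceeds what is achievable
        surplus = max(0.0, private_lb - offer)    # offer is below what must be taken
        return observation, shortage, surplus


    # A pole-offer of 10 sent to a cooperator whose private capacity is only 5:
    print(cooperator_response(10.0, 0.0, 5.0))   # (5.0, 5.0, 0.0)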
In a coordinated strategy the cooperators can be solved in parallel, which can significantly decrease the overall computational effort of the HDH if several CPUs are available. Additionally, the cooperators may be solved until provably-optimal or they may be stopped after a certain amount of time has elapsed.

3.4. Step 4. Conflict Resolution

Recover and retrieve the solution of the pole-obstacles, offsets and outages of all cooperators and re-calculate or re-adjust the pole-outlines for the pole-offers in the coordinator problem (constraint (1)) before the current iteration's coordinator problem is optimized. The pole-obstacles, offsets and outages are the key to the HDH strategy. If all of the pole-obstacles, offsets and outages are zero then a globally feasible solution has been found given that the coordinator and cooperators are all feasible. Else, the HDH iterations must continue. In cases where the pole-offer $P_{i,k}$ from the coordinator is sent to more than one cooperator, as is the case for a resource-protocol, the largest pole-obstacle (in absolute terms) across the cooperators affected by that specific resource i is used.

$P^{LB}_{i,k} = P^{LB}_{i,k-1} - SSL \cdot \max_{sp}\left(P^{-}_{i,sp,k-1}\right) + SSL \cdot \max_{sp}\left(P^{+}_{i,sp,k-1}\right)$   (3a)

$P^{UB}_{i,k} = P^{UB}_{i,k-1} - SSU \cdot \max_{sp}\left(P^{-}_{i,sp,k-1}\right) + SSU \cdot \max_{sp}\left(P^{+}_{i,sp,k-1}\right), \quad i = 1, \dots, NP, \; i \in sp$   (3b)

In equations (3a) and (3b) there are two lower and upper step-size adjustment parameters, SSL and SSU respectively, which provide the added flexibility of defining different rates of change for the lower and upper bounds in the coordinator problem. In the motivating examples and in the illustrative industrial example these parameters were defaulted to a value of 1. The operator $\max_{sp}(\cdot)$ implies the maximum over all sp. Alternatively, for some decomposed problems, the aggregating operator $\sum_{sp}(\cdot)$ may be used instead.

The rationale for the re-adjustment of the pole-outlines at every iteration is essentially taken from the notion of capacity planning or what is also known as capacitating. The idea is for the lower level elements (cooperators) to feed back to the higher level element (coordinator) what they are capable of doing in terms of their achievable capacity or capability.
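Once the cooperators' deviations are in hand, Step 4 reduces to a small amount of arithmetic; the sketch below implements equations (3a) and (3b) with the default step sizes SSL = SSU = 1 and the max-over-cooperators aggregation. The list-of-tuples interface is a convention of this sketch only.

    def update_pole_outline(lb, ub, deviations, ssl=1.0, ssu=1.0):
        """Re-capacitate one pole-outline from the cooperators' feedback.

        deviations: list of (shortage, surplus) pairs, one per cooperator that
        received this pole-offer (equations (3a)-(3b), max_sp aggregation)."""
        max_shortage = max(s for s, _ in deviations)
        max_surplus = max(p for _, p in deviations)
        new_lb = lb - ssl * max_shortage + ssl * max_surplus
        new_ub = ub - ssu * max_shortage + ssu * max_surplus
        return new_lb, new_ub


    # Two cooperators report shortages of 5 and 10 units against the same offer:
    print(update_pole_outline(0.0, 10.0, [(5.0, 0.0), (10.0, 0.0)]))  # (-10.0, 0.0)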
In real-life decision-making problems, short-term decisions are generally more important than long-term decisions due to the uncertainty in decision-making systems. A long decision-making horizon increases the chances of unforeseen events happening and possibly changing future decisions. Therefore, if deemed necessary, the pole-obstacles, offsets and outages can be temporally and spatially prioritized (weighted) in the cooperators' objective functions in order to account for uncertainty. Additionally the stopping criterion of the HDH may incorporate an equivalent prioritization strategy.

Three motivating examples are now presented to further illustrate the details of the conflict resolution step.

Motivating Example 1

The first example of conflict resolution can be seen in Figure 12a. In this figure the original (private) bounds for R1 and R2 in both the coordinator and the cooperators are shown. The subscripts 0, 1 and 2 refer to the resources in the coordinator, cooperator 1 and cooperator 2 respectively. The objective of the coordinator is to maximize the usage of R1 and R2 in the system. Note that R1 and R2 may correspond to either two different resources exchanged by the cooperators or to a single resource over two time-periods. In addition, note that there is a discrepancy in the capacity of cooperator 2 for R12. Due to a hypothetical breakdown in cooperator 2, R12 cannot be processed and has an upper bound of 0 units, but this is not known to cooperator 1 and is only known to the coordinator after the addition of feedback yielding updates to the pole-outlines.

The results for this example can be seen in Table 2. Since the objective of the coordinator is to maximize the usage of R1 and R2, the pole-offers in the first iteration are 10 and 5 respectively. Since the pole-offer for R1 exceeds the upper bounds of this resource by 5 units (cooperator 1) and by 10 units (cooperator 2), two pole-obstacles are generated for the R1 pole-offer. The R2 pole-offer is feasible for both cooperators and therefore no pole-obstacles are generated for this pole-offer. By applying the conflict resolution step, the bounds of R10 are adjusted by 10 units (the maximum value between 5 and 10, the two pole-obstacles) whereas the bounds of R20 remain unchanged since there were no pole-obstacles for this resource. By maximizing R1 and R2 with the new bounds, the new pole-offers for R1 and R2 are 0 and 5 respectively, which is a globally feasible solution, i.e., no pole-obstacles are generated for the pole-offers.
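The two iterations of motivating example 1 can be reproduced in a few lines. The script below restates the data implied by the narrative above (coordinator pole-outlines of 0 to 10 for R1 and 0 to 5 for R2; cooperator upper bounds of 5 and 0 for R1 and 5 and 5 for R2) and applies the same conflict-resolution arithmetic; the assumption that the coordinator simply offers the largest value its pole-outline allows, never below its original lower bound of zero, is ours.

    def cooperator_obstacle(offer, ub, lb=0.0):
        """Shortage/surplus one cooperator reports for a pole-offer (equation (2))."""
        return max(0.0, offer - ub), max(0.0, lb - offer)

    def adjust(bounds, devs):
        """Equations (3a)-(3b) with SSL = SSU = 1 and max_sp aggregation."""
        shortage = max(s for s, _ in devs)
        surplus = max(p for _, p in devs)
        return bounds[0] - shortage + surplus, bounds[1] - shortage + surplus

    # Coordinator pole-outlines and the cooperators' private upper bounds
    # (values consistent with the narrative of motivating example 1).
    outlines = {"R1": (0.0, 10.0), "R2": (0.0, 5.0)}
    coop_ub = {"R1": [5.0, 0.0], "R2": [5.0, 5.0]}

    for iteration in (1, 2):
        # Coordinator maximizes usage: offer the largest value the outline allows
        # (never below its original lower bound of zero -- an assumption here).
        offers = {r: max(0.0, ub) for r, (lb, ub) in outlines.items()}
        feedback = {r: [cooperator_obstacle(offers[r], ub) for ub in coop_ub[r]]
                    for r in offers}
        print(iteration, offers, feedback)
        if all(s == 0.0 and p == 0.0 for devs in feedback.values() for s, p in devs):
            break   # globally feasible: no pole-obstacles generated
        outlines = {r: adjust(outlines[r], feedback[r]) for r in outlines}

Running the script prints pole-offers of (10, 5) with non-zero obstacles at the first iteration and (0, 5) with no obstacles at the second, mirroring the narrative of Table 2.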
Motivating Example 2

The second example is illustrated in Figure 12b. The only difference between motivating examples 1 and 2 is the coordinator model. In motivating example 2 the coordinator must satisfy an additional inequality constraint involving R10 and R20 which in this case makes the coordinator problem degenerate. Table 3 shows the results for this example. Initially the pole-offers for R1 and R2 are 5 and 5 respectively. These pole-offers satisfy all cooperator constraints except for the R1 bound in cooperator 2, which is exceeded by 5 units. For the following iteration the bounds on R10 are shifted by 5 units and again 5 and 5 are the new pole-offers. The same pole-obstacle exists for R1 in cooperator 2 and the bounds on R10 are once more shifted by 5 units. In the third iteration the new pole-offers are 0 and 5, which are globally feasible. It is interesting to note that because of the introduction of degeneracy in this problem the number of iterations required to arrive at a globally feasible solution has increased.

Motivating Example 3

In this third example (Figure 12c) a disjunction or discontinuity is introduced in cooperator 1. In this case the upper bound of R2 can be one of two functions of R1 depending on the value of R1. The iterations for this example can be seen in Table 4. Note that the method is able to find a globally feasible solution in three iterations even for a degenerate problem with disjunctions. This is of particular importance in mixed-integer linear programming (MILP) problems where the integer variables express disjunctions in the model and there is a significant amount of degeneracy, especially in the time domain.

3.5. Step 5. Consensus Reconciliation

Recover from the pole-obstacles, offsets and outages, which are plus and minus deviations from the pole-offers as determined by the cooperator optimizations, the level of the pole-offer that each cooperator can admit or achieve – this is also referred to as the pole-observation.
In order to accelerate the convergence of the HDH to a globally feasible or consistent solution, a term to minimize the deviation of the current iteration's pole-offers from the consensus between the cooperators' pole-observations in the previous iteration is added to the objective function of the coordinator. The consensus between the cooperators is reached by essentially averaging adjacent cooperator pole-observations (sp and sp') which refer to the same resource i in the coordinator (pole-opinion). This is represented by the second, averaged term in the left-hand side of equation (4). For regulation and register-protocols there is only one cooperator involved, which is different from the resource-protocol mentioned, which has both upstream and downstream cooperators.

$P_{i,k} - \frac{P_{i,sp,k-1} + P_{i,sp',k-1}}{2} = P^{+}_{i,k} - P^{-}_{i,k}, \quad i = 1, \dots, NP, \; i \in sp, sp'$   (4)

The pole-outliers $P^{-}_{i,k}$ and $P^{+}_{i,k}$ then must be added to the objective function of the coordinator using a pole-outlier weight $w_i$, which is calculated as the maximum value of the pole-outlier shortage and surplus from the previous iteration as shown in equation (5).

$w_i = \max\left(P^{-}_{i,k-1}, P^{+}_{i,k-1}\right), \quad i = 1, \dots, NP$   (5)

At the first iteration the weights $w_i$ are set to zero, which basically removes the pole-outlier minimization for that iteration. The reason for weighting the pole-outliers by the previous iteration's maximum value is to give more relative weight to those pole-outliers that are deviating more from the consensus or average. If there is consensus in the previous iteration for a particular resource then the weight is zero for the following iteration. Therefore the new objective function term to be minimized for consensus reconciliation in the coordinator is:

$\sum_{i=1}^{NP} w_i \left(P^{-}_{i,k} + P^{+}_{i,k}\right)$   (6)

The rationale for achieving a consensus for each resource, regulation and register-protocol is related to the notion of harmonizing pole-opinions between one or more cooperators. Given the overriding focus of the HDH on global feasibility, finding solutions that are consistent amongst each cooperator and the coordinator is aided by minimizing pole-outliers between the interested parties.
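The consensus-reconciliation terms are easiest to see as plain arithmetic. The sketch below computes the pole-opinion (the average of the adjacent cooperators' previous pole-observations), splits the coordinator's deviation from it into the pole-outlier shortage and surplus of equation (4), and forms the weighted penalty of equations (5) and (6). In the actual coordinator these outliers are decision variables inside its MILP; the sketch merely evaluates them for a fixed candidate offer, and the variable names are ours.

    def consensus_terms(offer_k, observations_prev, outliers_prev):
        """Pole-outlier bookkeeping for one pole (equations (4)-(6)).

        offer_k           : coordinator's pole-offer at the current iteration
        observations_prev : previous-iteration pole-observations of the adjacent
                            cooperator(s); one value for regulation/register
                            protocols, two for a resource protocol
        outliers_prev     : (shortage, surplus) pole-outliers from the previous
                            iteration, used to build the weight of equation (5)"""
        opinion = sum(observations_prev) / len(observations_prev)  # consensus average
        deviation = offer_k - opinion
        outlier_plus = max(0.0, deviation)     # offer above the consensus
        outlier_minus = max(0.0, -deviation)   # offer below the consensus
        weight = max(outliers_prev)            # w_i, equation (5); zero at iteration 1
        penalty = weight * (outlier_minus + outlier_plus)  # contribution to (6)
        return opinion, (outlier_minus, outlier_plus), penalty


    # Offer of 8 against previous observations of 5 and 7 from the two cooperators:
    print(consensus_terms(8.0, [5.0, 7.0], (2.0, 0.0)))
    # -> (6.0, (0.0, 2.0), 4.0)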
Both Steps 4 and 5 represent the tradition of solving combinatorial problems with a method called greedy-constructive search, which has the charter of finding feasible solutions at the start of the search. The notion of local-improvement search is the guiding principle for the following two coordinated reasoning methods.

3.6. Step 6. Continuous Refinement

Steps 1 to 5 are repeated until the pole-obstacles, offsets and outages of all cooperators are zero, i.e., until a coordinated or hierarchical equilibrium is reached. If a different solution is desired after reaching the equilibrium, the incumbent solution (pole-origin) and all previous incumbent solutions can be eliminated by using the strategy outlined in Dogan and Grossmann (2006) in their constraints (36) and (37). The algorithm may be re-started by keeping the same pole-outlines $P^{LB}_{i,k}$ and $P^{UB}_{i,k}$ as in the current iteration or by re-initializing the pole-outlines to their original starting values or to some other value. This reasoning element may be considered as an improvement stage of the algorithm, similar to the notion of local-improvement search found in meta-heuristics such as simulated annealing and tabu search.

3.7. Step 7. Challenge Research

There may be cases in which there is additional information regarding a target value for the pole-offers as specified by some higher-level system, that is, above the coordinator layer (i.e., from the executive to the director). For example, if the coordinator is the master scheduler and the cooperators are the individual schedulers then the planner can provide these pole-objectives (targets). The pole-objectives have two benefits. First, they can decrease the number of iterations given that if the plan is achievable (thus not necessarily over-optimized or "cherry-picked") then the pole-objectives can help with maneuvering the coordinator to find globally feasible solutions faster. In addition, these pole-objectives can facilitate finding better globally feasible solutions given that the planner's goal is to push the system to regions of higher efficiency, effectiveness and economy. The use of pole-objectives is also a similar idea to a setpoint that is sent from the plant-wide optimizer to individually distributed model predictive controllers (MPC) (Lu, 2003).
When applying the challenge research step, the additional variables representing the deviation of the pole-offers from the pole-objectives are called pole-opportunities and are determined as:

$PP^{-}_{i,k} - PP^{+}_{i,k} = PP_i - P_{i,k}, \quad i = 1, \dots, NP$   (7)

where $PP_i$ is the pole-objective for pole i. The additional objective function term in the coordinator is:

$\sum_{i=1}^{NP} ww_i \left(PP^{-}_{i,k} + PP^{+}_{i,k}\right)$   (8)

where the pole-opportunity weights $ww_i$ are determined according to the same strategy as the pole-outlier weights $w_i$. We liken the challenge research phase to an aiming or advancement aspect of the reasoning which is also related to a local-improvement search.

The HDH can be represented in a block diagram in a similar fashion to model predictive controllers (MPC), of which MPC has three forms: linear, non-linear and hybrid. Figure 13 shows the coordinator as the MPC's feedforward engine (or economic/efficiency optimizer) that sends setpoints (pole-offers) to the cooperators, which correspond to the MPC's feedback engine (or actual plant). The measured variables of the system are the pole-observations while the predicted variables are the setpoints. The difference between the predicted and the measured variables corresponds to what is known as the MPC's bias terms (pole-obstacles, offsets and outages), which are fed back to the coordinator for re-optimization. If pole-objectives exist, they are used within the coordinator's optimization problem.

When limited feedback information such as the bias updating strategy in MPCs or linear real-time optimizers (RTOs) is used to update model parameters there is a possibility that the closed-loop system will not converge to the optimal operating policy in the plant, even for linear systems with no structural mismatch. Forbes and Marlin (1994) were the first to demonstrate that if the parameter values of the left-hand side constraint coefficients deviate enough from the perfect values (from the plant) the closed-loop system may converge to a sub-optimal corner point or active set. Due to the previous relationship shown between the HDH and MPC, the conclusions in Forbes and Marlin (1994) also indicate that since there will usually be significant parametric and structural mismatch between the coordinator and cooperators the system may converge to sub-optimal solutions. Mismatch primarily originates due to the existence of private constraints.
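Putting Steps 5 and 7 together, the coordinator's objective gains two weighted penalty terms on top of its own economic objective. The sketch below assembles both terms for a fixed candidate set of pole-offers; all names and the illustrative data are our own, and in the real coordinator MILP these terms would be built from the outlier and opportunity variables of equations (4) to (8) rather than evaluated after the fact.

    def coordinator_penalties(offers, opinions, objectives, w, ww):
        """Consensus (equation (6)) and challenge (equation (8)) penalty terms for
        a fixed candidate set of pole-offers.

        offers, opinions, objectives, w, ww: dicts keyed by pole name holding the
        pole-offers, previous-iteration pole-opinions, planner's pole-objectives
        and the two weight sets."""
        consensus = sum(w[i] * abs(offers[i] - opinions[i]) for i in offers)
        challenge = sum(ww[i] * abs(offers[i] - objectives[i]) for i in offers)
        return consensus, challenge


    offers = {"R1": 0.0, "R2": 5.0}
    opinions = {"R1": 0.0, "R2": 5.0}       # consensus reached on both poles
    objectives = {"R1": 4.0, "R2": 5.0}     # planner would like R1 pushed to 4
    w = {"R1": 0.0, "R2": 0.0}
    ww = {"R1": 1.0, "R2": 1.0}
    print(coordinator_penalties(offers, opinions, objectives, w, ww))  # (0.0, 4.0)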
The HDH can be represented in a block diagram in a similar fashion to model predictive control (MPC), of which there are three forms: linear, non-linear and hybrid. Figure 13 shows the coordinator as the MPC's feedforward engine (or economic/efficiency optimizer) that sends setpoints (pole-offers) to the cooperators, which correspond to the MPC's feedback engine (or actual plant). The measured variables of the system are the pole-observations while the predicted variables are the setpoints. The difference between the predicted and the measured variables corresponds to what is known as the MPC's bias terms (pole-obstacles, offsets and outages), which are fed back to the coordinator for re-optimization. If pole-objectives exist, they are used within the coordinator's optimization problem.

When limited feedback information, such as the bias updating strategy in MPCs or linear real-time optimizers (RTOs), is used to update model parameters, there is a possibility that the closed-loop system will not converge to the optimal operating policy of the plant, even for linear systems with no structural mismatch. Forbes and Marlin (1994) were the first to demonstrate that if the parameter values of the left-hand-side constraint coefficients deviate enough from the perfect values (from the plant), the closed-loop system may converge to a sub-optimal corner point or active set. Given the relationship shown above between the HDH and MPC, the conclusions in Forbes and Marlin (1994) also indicate that, since there will usually be significant parametric and structural mismatch between the coordinator and cooperators, the system may converge to sub-optimal solutions. Mismatch primarily originates due to the existence of private constraints.

Zyngier (2006) developed methods that monitored, diagnosed and enhanced the performance of such closed-loop optimization systems by (1) assessing the potential effects of parameter uncertainty on the objective function (determination of a "profit gap"), (2) detecting the main parametric contributors to the profit gap using a novel sensitivity analysis method that does not require the assumption of a fixed active set, and (3) reducing the profit gap by applying designed experiments to the plant. The work in Zyngier (2006) could potentially be applied to the HDH to gauge the difference in objective function value (profit, cost, etc.) between using globally rigorous and relaxed or reduced coordinators. In addition, the diagnostics portion of that work would focus model-improvement efforts on the most significant sections of the model in terms of objective function value.

It is also possible to align the HDH directly with the related continuous-improvement philosophies or paradigms of the Deming Wheel, Shewhart Cycle and Kaizen. We refer to this as the plan-perform-perfect-loop or P3-loop shown in Figure 14, which has both feedforward and feedback components (Kelly, 2005a). In the context of the HDH, both the plan and perfect functions are included in the coordinator while the perform function embodies the cooperators. There is feedforward from the plan function to both the perform and the perfect functions, and there is a feedback loop from the perform function through the perfect function back to the plan function. It should be emphasized that the perfect function's feedback to the plan function can take several forms: updating, re-calibrating or re-training the decision-making model inside the plan function, in terms of both structure and parameters, should always be considered inside the perfect function instead of simply resetting its bounds or constraint right-hand-sides (i.e., capacity planning or re-capacitating). The dotted rectangle represents an input-output relation for any manufacturing or production plant, site, enterprise or system in terms of how orders are inputted and how objects (material, information and work products) are outputted from the system. The P3-loop is also a useful tool in analyzing and interpreting the system dynamics in terms
of the many contributors to the variability8 of lead-time, cycle-time, dead-time or delay between when an order is placed by an external or internal customer and when the objects are finally received by the customer some time into the future. In other words, the dotted rectangle is the overall system black-box or block.

8 It is well-known in queuing theory that variability causes congestion and congestion causes a reduction in capacity or capability.

Finally, the P3-loop, which is essentially an organizational or managerial archetype, exists at some level or degree in all enterprises, whether in the process, discrete-parts or service industries, and is also at the heart of the HDH.

The HDH algorithm outlined in Steps 1 to 7 is a method for obtaining globally feasible solutions. While the HDH can be used independently when any globally feasible solution is equally valuable to the decision-making system, it can also be embedded in a global optimization framework such as Lagrangean decomposition as a means of obtaining the globally feasible lower bound of a maximization problem at each iteration of the overall algorithm (Wu and Ierapetritou, 2003; Karuppiah and Grossmann, 2006).
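To make the overall flow of Steps 1 to 7 concrete, the driver-loop sketch below outlines one possible orchestration of the HDH. The three callables (solve_coordinator, solve_cooperator, adjust_outlines) are placeholders for the coordinator model, the cooperator sub-problems and the pole-outline re-adjustment rules of Steps 5 and 6; they are assumptions for illustration only, not an implementation from the paper.

```python
# Skeleton of an HDH driver loop (illustrative only).
from concurrent.futures import ThreadPoolExecutor

def hdh(solve_coordinator, solve_cooperator, adjust_outlines,
        cooperators, outlines, max_iter=50):
    """outlines: dict pole -> (LB, UB); the user-supplied callables wrap the
    coordinator model, the cooperator sub-problems and the conflict
    resolution / consensus reconciliation re-adjustment rules."""
    for k in range(1, max_iter + 1):
        # Coordinator proposes pole-offers within the current pole-outlines
        # (also minimizing deviation from pole-objectives when they exist).
        offers = solve_coordinator(outlines)

        # Each cooperator checks the offers against its private constraints
        # and returns a dict pole -> pole-obstacle/offset/outage (signed).
        with ThreadPoolExecutor() as pool:
            feedback = list(pool.map(
                lambda sp: solve_cooperator(sp, offers), cooperators))

        # Hierarchical (coordinated) equilibrium: all feedback is zero.
        if all(abs(v) < 1e-9 for fb in feedback for v in fb.values()):
            return offers, k

        # Otherwise re-adjust the pole-outlines and iterate (Steps 5-6).
        outlines = adjust_outlines(outlines, feedback)

    raise RuntimeError("no globally feasible solution found within max_iter")
```

Under a coordinated strategy the cooperator sub-problems are independent and can be evaluated in parallel as above, whereas a collaborative strategy would replace the parallel map with an in-series, in-priority pass over the collaborators.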
  • 26. 26 coordinator will always be feasible for all cooperators and thus the HDH will converge in a single iteration. On the other hand if the coordinator is partially rigorous or restricted it contains the entire feasible solution of a sub-section of a decision-making system (i.e., cooperator 2 in Figure 15b). In this case explicitly including cooperator 2 as an element in the cooperation layer in the HDH is optional since all pole-offers from the coordinator will be feasible for cooperator 2. If the coordinator is a global reduced model it may not include part of the joint feasible solution space of the cooperation layer (Figure 15c). When the feasible set of the coordinator model contains the feasible sets of all cooperators, i.e., when the coordinator is a global relaxed problem, the coordination is always feasible when all cooperators are feasible (Figure 15d). Figure 16 illustrates how the HDH can be expressed in the general decomposition framework. It should be noted that although we use the term "weight adjustment" this is not related to the weight- directed method previously described. The weight adjustment method is used to determine the objective function weights for Steps 5 and 7 iw and iww respectively. The following section presents the application of the HDH to an industrial off-site and on-site storage and handling system. Three different decomposition strategies are applied, namely a coordinated strategy using a relaxed coordinator model, a coordinated strategy using a restricted or partially rigorous model and a collaborative strategy. The results are then compared with the centralized decision-making strategy of solving the global rigorous coordinator model. Finally, the conclusions are summarized and directions for future research are presented. 4. Illustrative Example: Off-Site and On-Site Storage and Handling System The layout and connectivity of off-site and on-site storage tanks, pumping and blending units at a petrochemical process industry in Asia can be seen in Figure 17. In this figure the diamonds represent perimeter-units through which material enters or leaves the system, triangles represent pool-units that can store material for indefinite amounts of time, and rectangles represent pipeline-units and continuous- process-units (blenders). This system has been modeled under the unit-operation-stock superstructure
(UOSS) and the quantity-logic-quality paradigm (QLQP) described in Kelly (2004a), (2005b) and (2006), and using the logistics inventory modeling details shown in Zyngier and Kelly (2007). Material arrives at the off-site terminal tanks through three supply perimeter-units (S11, S12 and S13) and is directed to one of six pool-units (T11-T16). The material is then taken through one of two pipeline-units (P1 or P12) to the on-site facilities. After storage in pool-units T23-T24 the material is then blended in the B21 and B22 blending process-units, stored in pool-units T25 or T26 and then sent to one of two demand perimeter-units (D21 or D22). The objective function of the scheduling problem is to maximize profit, which in this case is only a function of D21 and D22 ($10 for every kMT of material allocated to a demand perimeter-unit); i.e., there are no costs associated with S11, S12, S13 and S21, which is not unreasonable when feedstock is supplied in fixed amounts according to a fixed delivery schedule. The scheduling horizon is 8 days with 24-hour time-period durations. This system is naturally decomposable into the off-sites area (Figure 18) and the on-sites area (Figure 19).

Note that this illustrative example demonstrates two different strategies for handling resource-protocols between cooperators; no regulation- or register-protocols are used in this example. The first strategy is to decompose the resource-protocol across a unit, which is done on pipeline-unit P1. The second strategy is to decompose the system across a connection between units (P12 and the pool-units), which requires modeling an additional hypothetical process-unit (P22) that does not exist physically in the multi-site system.

In addition to the centralized (non-decomposed) approach, three different decomposition strategies were applied to this system: a collaborative strategy and two coordinated strategies using a relaxed and a partially rigorous coordinator model (Figure 20). The centralized approach (Figure 20a) implies that the coordinator is a global rigorous model and therefore the HDH is guaranteed to converge in a single iteration, although it may take an unreasonable amount of time to solve. Since all pole-offers will be feasible for the cooperators, it is not necessary to include the cooperators explicitly in the algorithm. In the collaborative strategy (Figure 20b) no information other than the resource-poles is shared across the collaborators. This strategy can be easily inserted into the HDH structure by modeling a coordinator that
has perfect knowledge about the constraints at the on-sites (cooperator 2) but no knowledge about the off-sites. Therefore the coordinator can be interpreted as a local rigorous model of the on-sites only. In the coordinated strategy with a relaxed coordinator (Figure 20c), each cooperator is only willing to share its connectivity information (i.e., the number of units and their pumping interconnections) and the upper bounds on the pool-unit lot-sizes, but no additional operational logic details such as settling-times or lower bounds on pool-unit lot-sizes. The coordinated strategy using a partially rigorous coordinator model (Figure 20d) contains all of the rigorous model information for cooperator 2 and only the connectivity information of cooperator 1 in the coordination layer. Since the rigorous model for cooperator 2 exists within the coordinator, it is not necessary to include this cooperator explicitly in the HDH, because all of the pole-offers will necessarily be feasible for cooperator 2 and no feedback is required. The details of the modeling can be found in Table 5 to Table 13.

It should be mentioned that for this illustrative example only the conflict resolution and consensus reconciliation coordinated reasoning tactics were applied. Continuous refinement and challenge research were not used, given that there were no available higher- or upper-level objectives/targets for the resource-protocols and we felt it was necessary to show the results without further complicating the problem with incumbent-elimination quantity and logic cuts. Hence, only the greedy-constructive elements of the HDH search are presented in the illustrative example, but not the local-improvement elements.

In Table 5 to Table 8, shut-down-when-below (SDWB) refers to the logistics constraints (involving both quantity and logic variables) that only allow the shut-down of an operation-mode when the hold-up is below a certain threshold value. The SDWB constraint can be modeled as follows (Zyngier and Kelly, 2007):

xh_{pp,t} \le XH^{max}_{pl} - \left( XH^{max}_{pl} - XH^{SDWB}_{pl} \right) \left( y_{pl,t-1} - y_{pl,t} \right), \quad \forall pl = 1 \ldots NPL, \ t = 1 \ldots NT \qquad (9)

where xh_{pp,t} refers to the hold-up of physical unit pp at time-period t, y_{pl,t} is the logic variable for the set-up of logical unit pl at time-period t, XH^{max}_{pl} corresponds to the maximum hold-up or inventory of logical unit pl and XH^{SDWB}_{pl} is the shut-down-when-below lot-size of logical pool-unit pl.
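A minimal sketch of how constraint (9) could be generated for a single pool-unit over the scheduling horizon is shown below; the hold-up bound, SDWB threshold and horizon length are illustrative values only, not taken from the example tables.

```python
# Sketch: shut-down-when-below (SDWB) logistics constraint of Eq. (9) for
# one pool-unit over NT time-periods, written with PuLP.  Data is illustrative.
from pulp import LpProblem, LpVariable, LpMaximize

NT = 8
XH_max, XH_sdwb = 12.5, 4.0          # kMT: maximum hold-up and SDWB threshold
prob = LpProblem("sdwb_demo", LpMaximize)
xh = {t: LpVariable(f"xh_{t}", lowBound=0, upBound=XH_max) for t in range(NT + 1)}
y = {t: LpVariable(f"y_{t}", cat="Binary") for t in range(NT + 1)}  # set-up logic

# Eq. (9): if the operation-mode shuts down between t-1 and t
# (y[t-1] = 1, y[t] = 0), the hold-up must be below the SDWB threshold;
# otherwise the constraint relaxes to the ordinary upper bound XH_max.
for t in range(1, NT + 1):
    prob += xh[t] <= XH_max - (XH_max - XH_sdwb) * (y[t - 1] - y[t])
```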
The fill-draw delay (FDD) (Table 5 to Table 8) represents the timing between the last filling sub-operation and the following drawing sub-operation. An FDD of zero indicates a standing-gauge pool-unit. In the following constraints, the subscripts jin and jout indicate the specific inlet- and outlet-ports attached to the units that are downstream and upstream of the logical pool-unit respectively, while ipl and jpl refer to the outlet- and inlet-port on the logical pool-unit itself.

y_{jout,jpl,t} + y_{ipl,jin,t'} \le 1, \quad \forall t' = t+1 \ldots t + FDD^{min}_{pl}, \ t' \le NT, \ t = 1 \ldots NT \qquad (10)

Constraint (10) stipulates that a draw cannot occur until after the lower FDD elapses. There is also an upper bound on the FDD, which indicates the maximum duration between the last filling and the following drawing of stock out of the logical pool-unit, as indicated below.

y_{jout,jpl,t} \le \sum_{t'=t+1}^{t+FDD^{max}_{pl}} y_{ipl,jin,t'}, \quad \forall t = 1 \ldots NT \mid t + FDD^{max}_{pl} \le NT \qquad (11)

For a more thorough description of the quantity, logic and logistics constraints encountered in inventory models of the process industries, the reader should refer to Kelly (2006) and to Zyngier and Kelly (2007).

In Table 9 to Table 13 additional quantity, logic and logistics details are presented. Table 9 and Table 10 provide the semi-continuous flow logistics constraints for the outlet- and inlet-ports respectively, which model flows that can either be zero or between their lower and upper bounds within a time-period. Table 11 shows the opening inventories, hold-ups or initial lot-size amounts in the pool-units at the start of the schedule, with initial mode-operation logic set-ups for whether the pool-unit is in material-operation or service A, B, C or ABC. The material balance for pool-units can be expressed as:

xh_{pp,t} = xh_{pp,t-1} + xf_{jin,t} - xf_{jout,t}, \quad \forall t = 1 \ldots NT \qquad (12)

where xf_{jin,t} and xf_{jout,t} refer to the flows entering and leaving physical unit pp at time-period t respectively. Table 12 specifies the timing of what we call inverse-yield orders. Inverse-yields are really recipe, intensity or proportion amounts of how much of stocks A, B and C should be mixed together in the blending process-units; these were specified by a higher-level planning system. Table 13 shows the supply amounts of A, B and C from the supply perimeter-units over the scheduling horizon.
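Tying the logistics details above together, the sketch below generates the lower fill-draw-delay constraint (10) and the pool material balance (12) for one hypothetical pool-unit; the bounds, delay and opening inventory are illustrative and do not correspond to any specific unit in Tables 5 to 11.

```python
# Sketch: lower fill-draw-delay (Eq. 10) and pool material-balance (Eq. 12)
# constraints for a single pool-unit, written with PuLP.  Data is illustrative.
from pulp import LpProblem, LpVariable, LpMaximize

NT, FDD_min = 8, 2
prob = LpProblem("pool_demo", LpMaximize)
y_fill = {t: LpVariable(f"y_fill_{t}", cat="Binary") for t in range(1, NT + 1)}
y_draw = {t: LpVariable(f"y_draw_{t}", cat="Binary") for t in range(1, NT + 1)}
xf_in = {t: LpVariable(f"xf_in_{t}", lowBound=0, upBound=2.0) for t in range(1, NT + 1)}
xf_out = {t: LpVariable(f"xf_out_{t}", lowBound=0, upBound=1.5) for t in range(1, NT + 1)}
xh = {t: LpVariable(f"xh_{t}", lowBound=0, upBound=12.5) for t in range(0, NT + 1)}
prob += xh[0] == 10.5                      # illustrative opening inventory

# Eq. (10): after a fill in period t, no draw may occur until the lower
# fill-draw delay has elapsed.
for t in range(1, NT + 1):
    for tp in range(t + 1, min(t + FDD_min, NT) + 1):
        prob += y_fill[t] + y_draw[tp] <= 1

# Eq. (12): pool material balance linking hold-up and the in/out flows.
for t in range(1, NT + 1):
    prob += xh[t] == xh[t - 1] + xf_in[t] - xf_out[t]
```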
For each of the three decomposition strategies, two different demand scenarios were applied: an aggressive scenario where all demands were fixed (Table 14), i.e., the lower and upper bounds are equal, and a conservative scenario where the demands had zero lower bounds (Table 15). Table 16 displays the problem statistics. These problems were generated and solved using XPRESS-MOSEL version 1.6.3 and XPRESS-MILP version 17.10.08 with all default settings (Gueret et al., 2002). In addition, this XPRESS-MOSEL version includes XPRESS-MOSEL-PARALLEL, which provided the ability to solve problems simultaneously on different CPUs. In order to limit computation times, the maximum time on any individual optimization problem (coordinator and cooperators) was limited to either 30 CPU-seconds or the time to find the first integer-feasible solution (whichever was longer).

Figure 21 shows a Gantt chart with the operation-modes of the units throughout the scheduling horizon for the solution from the centralized strategy. The conservative scenario allows the non-fulfillment of orders since the lower bounds on the demands are zero, thus resulting in an easier scheduling problem from a schedule-feasibility perspective. This statement is confirmed by the results in Table 17 (aggressive scenario) and Table 18 (conservative scenario). As expected, all decomposition strategies in the conservative scenario (Figure 22) had significantly faster convergence, due to requiring fewer HDH iterations, than in the aggressive scenario (Figure 23). On the other hand, the overall objective function was significantly higher in the aggressive scenario since all demands had to be fully satisfied.

In terms of the performance of the decomposition approaches, the collaborative strategy found better feasible solutions than the coordinated approaches. This is partly due to the fact that the coordinator may under-estimate (sand-bag) the performance of the cooperators, which does not occur in the collaborative strategy. In contrast, the collaborative strategy needed more iterations than the coordinated approaches in the more difficult aggressive scenario. It should again be highlighted that the computational performance of the coordinated strategies is enhanced by the ability to parallelize the cooperator sub-problems, which is not possible with a collaborative strategy given that a collaborative approach is performed in-series or in-priority.
As previously mentioned, the collaborative strategy assumes that the on-sites (cooperator 2) act as a locally rigorous coordinator in the HDH, and therefore the decisions made by the on-sites have priority over the decisions made by the off-sites (cooperator 1). This is aligned with the notion of scheduling the bottlenecks first or focusing on the bottlenecks. The opposite collaboration strategy was also attempted, where the off-sites were modeled as the coordinator and therefore its decisions had top priority. This strategy failed to find a globally feasible solution, which is easily explained by the fact that the off-sites do not constitute a bottleneck nor are they the true driver of the decision-making in this system. This case demonstrates that in order to successfully apply a collaborative strategy, as previously stated, it is necessary to identify not only the correct segmentation of the decision-making system but also the priority of its individual elements.

The use of a partially rigorous coordinator model significantly improved the speed of the HDH, as expected. In the aggressive scenario the partially rigorous coordinator also provided a better globally feasible solution than the relaxed coordinator. The centralized strategy implies the use of a global rigorous model in the coordination layer; its first feasible solution took 10 and 4 times longer to obtain than with the collaborative strategy in the aggressive and conservative scenarios respectively. In this example the first feasible solution of the centralized system was also the global, provably-optimal one. It is interesting to note that the global optimum was also achieved by the collaborative strategy in the aggressive scenario. This shows that while it is not guaranteed that the global optimum will be found by using the HDH, this heuristic may in fact be able to obtain it, albeit serendipitously.

5. Conclusions

The focus of this paper has been to present a heuristic which can be used to find globally feasible solutions to usually large decentralized and distributed decision-making problems when a centralized approach is not possible. A standardized nomenclature was established to better describe the communication, coordination and cooperation between two hierarchical layers of a decomposed problem. The HDH was applied to an illustrative example based on an actual industrial multi-site system
and was able to solve this problem faster than a centralized model of the same problem when using both coordinated and collaborative approaches. In addition, the HDH has been contrasted with other methods structured around the notions of price/pole-directed, weight-directed and error-directed decomposition strategies. Even though the HDH is currently an error-directed method, future work will focus on devising weight-directed enhancements to this heuristic using aggregation and disaggregation rules to automatically transform the global rigorous model into both local rigorous and global relaxed/reduced/restricted models. It is expected that such an enhancement could minimize the model-mismatch or inaccuracies introduced by the contrast between public and private constraints. In essence, this is equivalent to using an exogenous (global rigorous) model supplied by the modeler or scheduling analyst and programmatically generating coordinator and cooperator endogenous models using "reduction rules".

And finally, we would like to emphasize that the HDH is only a rule-of-thumb for helping with the diverse reasoning behind the coordination of essentially bi-level decomposition optimization problems. For tightly bottlenecked and/or critically resource-constrained problems, attention to how the coordinator is set up and how the cooperators are separated will determine its success, as is the case with all heuristic approaches.

Nomenclature

Sets and indices
i = 1…NP number of poles
k = 1…NI number of iterations
pp = 1…NPP number of physical units
pl = 1…NPL number of logical units
sp = 1…NSP number of cooperating sub-problems
t = 1…NT number of time-periods
Parameters
SSL step size for lower pole-outline (bound) adjustment
SSU step size for upper pole-outline (bound) adjustment
w_i weight parameters for the pole-outlier objective function terms
ww_i weight parameters for the pole-opportunity objective function terms
XH^{min}_{pl} minimum hold-up or inventory of logical unit pl
XH^{max}_{pl} maximum hold-up or inventory of logical unit pl
XH^{SDWB}_{pl} shut-down-when-below lot-size of logical pool-unit pl
FDD^{min}_{pl} minimum fill-draw delay on logical unit pl
FDD^{max}_{pl} maximum fill-draw delay on logical unit pl

Variables
P_{i,k} pole-offer (from coordinator) for resource, regulation and/or register i at iteration k
P_{i,sp,k} pole-observation for resource i at cooperating sub-problem sp at iteration k
P^{LB}_{i,k} lower pole-outline (bound) of pole-offer i at iteration k
P^{UB}_{i,k} upper pole-outline (bound) of pole-offer i at iteration k
PP_i pole-objective i
P^{-}_{i,k} pole-outlier shortage for resource i in the coordinator at iteration k
P^{+}_{i,k} pole-outlier surplus for resource i in the coordinator at iteration k
P^{-}_{i,sp,k} pole-offer shortage for resource i at cooperating sub-problem sp at iteration k
P^{+}_{i,sp,k} pole-offer surplus for resource i at cooperating sub-problem sp at iteration k
PP^{-}_{i,k} pole-opportunity shortage for resource i in the coordinator at iteration k
PP^{+}_{i,k} pole-opportunity surplus for resource i in the coordinator at iteration k
xf_{jin,t} flow that enters a physical unit at time-period t
xf_{jout,t} flow that leaves a physical unit at time-period t
xh_{pp,t} hold-up of physical unit pp at time-period t
y_{jout,jin,t} logic variable for the movement from outlet-port jout to inlet-port jin at time-period t
y_{pl,t} logic variable for the set-up of logical unit pl at time-period t
λ Lagrange multiplier in Lagrangean decomposition

References

Bassett, M.H.; Pekny, J.F.; Reklaitis, G.V. (1996). Decomposition techniques for the solution of large-scale scheduling problems. AIChE J., 42, 3373.
Cheng, R.; Forbes, J.F.; Yip, W.S. (2007). Price-driven coordination method for solving plant-wide MPC problems. J. Proc. Control, 17, 429.
Dantzig, G.B.; Wolfe, P. (1960). Decomposition principle for linear programs. Oper. Res., 8, 1.
Dogan, M.E.; Grossmann, I.E. (2006). A decomposition method for the simultaneous planning and scheduling of single-stage continuous multiproduct plants. Ind. Eng. Chem. Res., 45, 299.
Forbes, J.F.; Marlin, T.E. (1994). Model accuracy for economic optimizing controllers: the bias update case. Comput. Chem. Eng., 18, 497.
Geoffrion, A.M. (1972). Generalized Benders decomposition. J. Optim. Theory and Appl., 10, 237.
Gueret, C.; Prins, C.; Sevaux, M.; Heipcke, S. (reviser and translator) (2002). Applications of Optimization with Xpress-MP. Dash Optimization, Blisworth, Northants, UK.
Jemai, Z.; Karaesmen, F. (2006). Decentralized inventory control in a two-stage capacitated supply chain. To appear in IIE Transactions.
Jörnsten, K.; Leisten, R. (1995). Decomposition and iterative aggregation in hierarchical and decentralized planning structures. Eur. J. Oper. Res., 86, 120.
Jose, R.A. (1999). Ph.D. thesis, University of Pennsylvania, PA, USA.
Jose, R.A.; Ungar, L.H. (1998). Auction-Driven Coordination for Plantwide Control. Foundations of Computer-Aided Process Operation, Snowbird, USA.
Jose, R.A.; Ungar, L.H. (2000). Pricing interprocess streams using slack auctions. AIChE J., 46, 575.
Karuppiah, R.; Grossmann, I.E. (2006). A Lagrangean based branch-and-cut algorithm for global optimization of nonconvex mixed-integer nonlinear programs with decomposable structures. Submitted to J. Global Optim.
Kelly, J.D. (2002). Chronological Decomposition Heuristic for Scheduling: Divide and Conquer Method. AIChE J., 48, 2995.
Kelly, J.D. (2004a). Production Modeling for Multimodal Operations. Chemical Engineering Progress, February, 44.
Kelly, J.D. (2004b). Stock Decomposition Heuristic for Scheduling: A Priority Dispatch Rule Approach. Honeywell Internal Technical Report.
Kelly, J.D.; Mann, J.L. (2004). Flowsheet Decomposition Heuristic for Scheduling: a Relax-and-Fix Method. Comput. Chem. Eng., 28, 2193.
Kelly, J.D. (2005a). Modeling production-chain information. Chemical Engineering Progress, February, 28.
Kelly, J.D. (2005b). The Unit-Operation-Stock Superstructure (UOSS) and the Quantity-Logic-Quality Paradigm (QLQP) for Production Scheduling in the Process Industries. Proceedings of the Multidisciplinary Conference on Scheduling Theory and Applications (MISTA), 1, 327.
Kelly, J.D. (2006). Logistics: the missing link in blend scheduling optimization. Hydrocarbon Processing, June, 45.
Lu, J. (2003). Challenging control problems and emerging technologies in enterprise optimization. Control Eng. Practice, 11, 847.
Wilkinson, S.J. (1996). Ph.D. thesis, Imperial College of Science, Technology and Medicine, London, UK.
Wu, D.; Ierapetritou, M.G. (2003). Decomposition approaches for the efficient solution of short-term scheduling problems. Comput. Chem. Eng., 27, 1261.
Zhang, N.; Zhu, X.X. (2006). Novel modeling and decomposition strategy for total site optimization. Comput. Chem. Eng., 30, 765.
Zyngier, D. (2006). Ph.D. thesis, McMaster University, Hamilton, ON, Canada.
Zyngier, D.; Kelly, J.D. (2007). Multi-Product Inventory Logistics Modeling in the Process Industries. Accepted for publication as a chapter in the book from the DIMACS and ExxonMobil Workshop on Computational Optimization and Logistics Challenges in the Enterprise (COLCE), eds. K.C. Furman and I.E. Grossmann.
Figure 1. Bi-level layers involved in decomposition.
Figure 2. Defects versus productivity trade-off curves for different reasoning isotherms (centralized, coordinated, collaborative and competitive).
Figure 3. General structure of decomposed problems.
Figure 5. Lagrangean decomposition.
Figure 6. Auction-based decomposition (Jose, 1999).
Figure 7. Auction-based decomposition with price-elasticity (Cheng et al., 2006).
Figure 8. Iterative aggregation/disaggregation decomposition (Jörnsten and Leisten, 1995).
Figure 9. Planning to scheduling decomposition (Bassett et al., 1996).
Figure 10. Yield updating decomposition (Zhang and Zhu, 2006).
Figure 11. Overview of proposed HDH algorithm.
Figure 12. Conflict resolution – three motivating examples. (a) Coordinator: 0 ≤ R1_0 ≤ 10, 0 ≤ R2_0 ≤ 5; Cooperator 1: 0 ≤ R1_1 ≤ 5, 0 ≤ R2_1 ≤ 10; Cooperator 2: 0 ≤ R1_2 ≤ 0, 0 ≤ R2_2 ≤ 10. (b) As in (a) with the additional coordinator constraint R1_0 + R2_0 ≤ 10. (c) Coordinator: 0 ≤ R1_0 ≤ 10, 0 ≤ R2_0 ≤ 15, R1_0 + R2_0 ≤ 20; Cooperator 1: 0 ≤ R1_1 ≤ 5, R2_1 ≤ 2·R1_1 for R1_1 ≤ 5 and R2_1 ≤ 0.5·R1_1 for R1_1 > 5; Cooperator 2: 0 ≤ R1_2 ≤ 20, 0 ≤ R2_2 ≤ 20.
Figure 13. Model predictive control structure of the HDH.
Figure 14. The plan-perform-perfect-loop of the HDH with orders-to-objects system dynamics.
Figure 15. Joint feasible regions of decomposed problem with different coordinator models: (a) global rigorous model; (b) global restricted model; (c) global reduced model; (d) global relaxed model.
Figure 16. Hierarchical decomposition heuristic.
Figure 18. Physical connectivity of local off-site sub-problem (cooperator 1).
Figure 19. Physical connectivity of local on-site sub-problem (cooperator 2).
Figure 20. Decomposition strategies: (a) centralized (no decomposition); (b) collaborative; (c) coordinated – relaxed coordinator; (d) coordinated – partially rigorous coordinator.
Figure 21. Gantt chart showing mode and material-operations – solution from centralized strategy.
Figure 22. Total pole-obstacles in the coordinated and collaborative strategies (conservative scenario): Collaborative ($898.60), Coordinated - Relaxed ($894.60), Coordinated - Partially Rigorous ($889.60).
Figure 23. Total pole-obstacles in the coordinated and collaborative strategies (aggressive scenario): Collaborative ($935.08), Coordinated - Relaxed ($905.08), Coordinated - Partially Rigorous ($929.20).
Table 1. Examples of organizational and managerial hierarchies.
Top Layer | Middle Layer | Bottom Layer
Planner | Master-Scheduler | Schedulers
Board of Directors | Chief-Executive | Executives
Chief-Executive | Executive | Directors
Executive | Director | Managers
Director | Manager | Workers (white-collar)
Manager | Supervisor / Lead | Workers (blue-collar), i.e., Operators, Laborers, Developers, Tradesmen, Journeymen, etc.
Supervisor / Lead | Workers (blue-collar) | Equipment (Unit), Services (Utilities), Tools (Utensils), Materials (Stocks) (UOSS)
Owner | General Contractor | Sub-Contractors
Administrator | Doctor | Nurses, Orderlies, Nursing Assistants, etc.
Sergeant | Corporal | Private, Soldier, Troop, etc.
Manager | Coach | Players, Team-Members
Principal | Teacher | Students
Table 2. Results – motivating example 1.
Iteration | Resource-Protocol | Pole-Outlines | Pole-Offers (Coordinator) | Pole-Obstacles (Cooperator 1) | Pole-Obstacles (Cooperator 2)
1 | R1 | 0 ≤ R1_0 ≤ 10 | 10 | -5 | -10
1 | R2 | 0 ≤ R2_0 ≤ 5 | 5 | 0 | 0
2 | R1 | -10 ≤ R1_0 ≤ 0 | 0 | 0 | 0
2 | R2 | 0 ≤ R2_0 ≤ 5 | 5 | 0 | 0
Table 3. Results – motivating example 2.
Iteration | Resource-Protocol | Pole-Outlines | Pole-Offers (Coordinator) | Pole-Obstacles (Cooperator 1) | Pole-Obstacles (Cooperator 2)
1 | R1 | 0 ≤ R1_0 ≤ 10 | 5 | 0 | -5
1 | R2 | 0 ≤ R2_0 ≤ 5 | 5 | 0 | 0
2 | R1 | -5 ≤ R1_0 ≤ 5 | 5 | 0 | -5
2 | R2 | 0 ≤ R2_0 ≤ 5 | 5 | 0 | 0
3 | R1 | -10 ≤ R1_0 ≤ 0 | 0 | 0 | 0
3 | R2 | 0 ≤ R2_0 ≤ 5 | 5 | 0 | 0
Table 4. Results – motivating example 3.
Iteration | Resource-Protocol | Pole-Outlines | Pole-Offers (Coordinator) | Pole-Obstacles (Cooperator 1) | Pole-Obstacles (Cooperator 2)
1 | R1 | 0 ≤ R1_0 ≤ 10 | 5 | 0 | 0
1 | R2 | 0 ≤ R2_0 ≤ 15 | 15 | -5 | 0
2 | R1 | 0 ≤ R1_0 ≤ 10 | 10 | 0 | 0
2 | R2 | -5 ≤ R2_0 ≤ 10 | 10 | -5 | 0
3 | R1 | 0 ≤ R1_0 ≤ 10 | 10 | 0 | 0
3 | R2 | -10 ≤ R2_0 ≤ 5 | 5 | 0 | 0
Table 5. Model parameters – cooperator 1.
Pools | Lot-Size LB (kMT) | Lot-Size UB (kMT) | SDWB (kMT) | FDD (h)
T11 | 0 | 9 | 0 | 0
T12 | 0 | 4.2 | 0 | 0
T13 | 2 | 12.5 | 4 | -
T14 | 2 | 12.7 | 4.05 | 0
T15 | 2 | 18.5 | 5.35 | 0
T16 | 2 | 18.5 | 6.6 | 0
Table 6. Model parameters – cooperator 2.
Pools | Lot-Size LB (kMT) | Lot-Size UB (kMT) | SDWB (kMT) | FDD (h)
T21 | 1 | 6.2 | - | -
T22 | 1 | 6.2 | - | -
T23 | 1 | 6.2 | 1.5 | -
T24 | 0 | 3.9 | 0 | -
T25 | 0 | 0.35 | - | -
T26 | 0 | 0.35 | - | -
Continuous Processes | Charge-Size LB (kMT/h) | Charge-Size UB (kMT/h)
B21 | 0.1 | 0.28
B22 | 0.1 | 0.3
P1 | 0.25 | 0.32
P22 | 0.25 | 0.35
Table 7. Model parameters – relaxed coordinator.
Pools | Lot-Size LB (kMT) | Lot-Size UB (kMT) | SDWB (kMT) | FDD (h)
T11 | 0 | 9 | - | -
T12 | 0 | 4.2 | - | -
T13 | 0 | 12.5 | - | -
T14 | 0 | 12.7 | - | -
T15 | 0 | 18.5 | - | -
T16 | 0 | 18.5 | - | -
T21 | 0 | 6.2 | - | -
T22 | 0 | 6.2 | - | -
T23 | 0 | 6.2 | - | -
T24 | 0 | 3.9 | - | -
T25 | 0 | 0.35 | - | -
T26 | 0 | 0.35 | - | -
Continuous Processes | Charge-Size LB (kMT/h) | Charge-Size UB (kMT/h)
B21 | 0.1 | 0.28
B22 | 0.1 | 0.3
P1 | 0.25 | 0.32
P22 | 0.25 | 0.35
Table 8. Model parameters – partially rigorous coordinator.
Pools | Lot-Size LB (kMT) | Lot-Size UB (kMT) | SDWB (kMT) | FDD (h)
T11 | 0 | 9 | - | -
T12 | 0 | 4.2 | - | -
T13 | 0 | 12.5 | - | -
T14 | 0 | 12.7 | - | -
T15 | 0 | 18.5 | - | -
T16 | 0 | 18.5 | - | -
T21 | 1 | 6.2 | - | -
T22 | 1 | 6.2 | - | -
T23 | 1 | 6.2 | 1.5 | -
T24 | 0 | 3.9 | 0 | -
T25 | 0 | 0.35 | - | -
T26 | 0 | 0.35 | - | -
Continuous Processes | Charge-Size LB (kMT/h) | Charge-Size UB (kMT/h)
B21 | 0.1 | 0.28
B22 | 0.1 | 0.3
P1 | 0.25 | 0.32
P22 | 0.25 | 0.35
Table 9. Model parameters – semi-continuous flow constraints on outlet-ports.
Source-Port | Mode-Operation | Flow LB (kMT/h) | Flow UB (kMT/h)
S11 | A,B,C | 0.04 | 0.35
S12 | A,B,C | 0.02 | 0.45
S13 | A,B,C | 0 | 2
S21 | A,B,C | 0 | 0.25
T11 | A,B,C | 0 | 1.5
T12 | A,B,C | 0 | 1.5
T13 | A,B,C | 0 | 1.5
T14 | A,B,C | 0 | 1.5
T15 | A,B,C | 0 | 1.5
T16 | A,B,C | 0 | 1.5
T21 | A,B,C | 0 | 0.28
T22 | A,B,C | 0 | 0.28
T23 | A,B,C | 0 | 0.28
T24 | A,B,C | 0 | 0.38
T25 | ABC | 0 | 0.28
T26 | ABC | 0 | 0.3
B21 | ABC | 0 | 0.28
B22 | ABC | 0 | 0.3
P1 | A,B,C | 0 | 0.25
P12 | A,B,C | 0 | 0.35
P22 | A,B,C | - | -
Table 10. Model parameters – semi-continuous flow constraints on inlet-ports.
Destination-Port | Mode-Operation | Flow LB (kMT/h) | Flow UB (kMT/h)
D21 | ABC | 0.06 | 0.28
D22 | ABC | 0.078 | 0.3
T11 | A,B,C | 0 | 2
T12 | A,B,C | 0 | 2
T13 | A,B,C | 0 | 2
T14 | A,B,C | 0 | 2
T15 | A,B,C | 0 | 2
T16 | A,B,C | 0 | 2
T21 | A,B,C | 0 | 0.28
T22 | A,B,C | 0 | 0.28
T23 | A,B,C | 0 | 0.28
T24 | A,B,C | 0 | 0.15
T25 | ABC | 0 | 0.28
T26 | ABC | 0 | 0.3
B21 | A,B,C | 0 | 0.28
B22 | A,B,C | 0 | 0.29
P1 | A,B,C | 0 | 0.35
P12 | A,B,C | 0 | 0.35
P22 | A,B,C | - | -
Table 11. Initial lot-sizes for pool-units.
Pool-Unit | Initial Lot-Size (kMT) | Initial Material
T11 | 0 | B
T12 | 0 | B
T13 | 10.579 | A (cooperators)
T14 | 12.675 | B
T15 | 18.051 | B
T16 | 2.484 | B
T21 | 5.251 | A
T22 | 5.52 | B
T23 | 5.608 | B
T24 | 0 | B
T25 | 0 | ABC
T26 | 0 | ABC
Table 12. Inverse-yield orders for blender process-units.
Blender-Unit | Inlet-Port | Start-Time (h) | End-Time (h) | Inverse-Yield LB | Inverse-Yield UB
B21 | A | 0 | 24 | 0 | 0
B21 | A | 24 | 48 | 0.95 | 1
B21 | A | 48 | 192 | 0 | 0
B21 | B | 0 | 24 | 0.95 | 1
B21 | B | 24 | 48 | 0 | 0
B21 | B | 48 | 192 | 0.95 | 1
B21 | C | 0 | 192 | 0 | 0
B22 | A | 0 | 24 | 0.01781 | 0.01968
B22 | A | 24 | 48 | 0.10124 | 0.11189
B22 | A | 48 | 72 | 0.2804 | 0.30992
B22 | A | 72 | 120 | 0 | 0
B22 | A | 120 | 144 | 0.8282 | 0.91538
B22 | A | 144 | 168 | 0.40206 | 0.44439
B22 | A | 168 | 192 | 0 | 0
B22 | B | 0 | 24 | 0.93219 | 1
B22 | B | 24 | 48 | 0.84877 | 0.93811
B22 | B | 48 | 72 | 0.6696 | 0.74008
B22 | B | 72 | 120 | 0.95 | 1
B22 | B | 120 | 144 | 0.1218 | 0.13462
B22 | B | 144 | 168 | 0.54794 | 0.60561
B22 | B | 168 | 192 | 0.95 | 1
B22 | C | 0 | 192 | 0 | 0
Table 13. Supply profile for the scheduling horizon.
Supply Perimeter-Unit (Mode) | Start-Time (h) | End-Time (h) | Rate LB (kMT/h) | Rate UB (kMT/h)
S11 (A) | 48 | 164 | 0.1219 | 0.1219
S12 (B) | 128 | 152 | 0.375 | 0.375
S12 (B) | 152 | 168 | 0.3125 | 0.3125
S12 (B) | 168 | 192 | 0.375 | 0.375
S13 (B) | 78 | 96 | 1.47222 | 1.47222
S13 (B) | 126 | 138 | 0.41667 | 0.41667
S21 (B) | 25 | 72 | 0.20426 | 0.20426
S21 (B) | 121 | 148 | 0.22833 | 0.22833
S21 (B) | 170 | 180 | 0.2055 | 0.2055
Table 14. Demand profile for the scheduling horizon – aggressive approach.
Demand Perimeter-Unit (Mode) | Start-Time (h) | End-Time (h) | Rate LB (kMT/h) | Rate UB (kMT/h)
D21 (ABC) | 0 | 48 | 0.18353 | 0.18353
D21 (ABC) | 48 | 72 | 0.18345 | 0.18345
D21 (ABC) | 72 | 96 | 0.18353 | 0.18353
D21 (ABC) | 96 | 120 | 0.18087 | 0.18087
D21 (ABC) | 120 | 144 | 0.18353 | 0.18353
D21 (ABC) | 144 | 168 | 0.20156 | 0.20156
D21 (ABC) | 168 | 192 | 0.18353 | 0.18353
D22 (ABC) | 0 | 24 | 0.28546 | 0.28546
D22 (ABC) | 24 | 48 | 0.28061 | 0.28061
D22 (ABC) | 48 | 72 | 0.28372 | 0.28372
D22 (ABC) | 72 | 96 | 0.27328 | 0.27328
D22 (ABC) | 96 | 120 | 0.27465 | 0.27465
D22 (ABC) | 120 | 144 | 0.27525 | 0.27525
D22 (ABC) | 144 | 168 | 0.27639 | 0.27639
D22 (ABC) | 168 | 192 | 0.26302 | 0.26302
Table 15. Demand profile for the scheduling horizon – conservative approach.
Demand Perimeter-Unit (Mode) | Start-Time (h) | End-Time (h) | Rate LB (kMT/h) | Rate UB (kMT/h)
D21 (ABC) | 0 | 48 | 0 | 0.18353
D21 (ABC) | 48 | 72 | 0 | 0.18345
D21 (ABC) | 72 | 96 | 0 | 0.18353
D21 (ABC) | 96 | 120 | 0 | 0.18087
D21 (ABC) | 120 | 144 | 0 | 0.18353
D21 (ABC) | 144 | 168 | 0 | 0.20156
D21 (ABC) | 168 | 192 | 0 | 0.18353
D22 (ABC) | 0 | 24 | 0 | 0.28546
D22 (ABC) | 24 | 48 | 0 | 0.28061
D22 (ABC) | 48 | 72 | 0 | 0.28372
D22 (ABC) | 72 | 96 | 0 | 0.27328
D22 (ABC) | 96 | 120 | 0 | 0.27465
D22 (ABC) | 120 | 144 | 0 | 0.27525
D22 (ABC) | 144 | 168 | 0 | 0.27639
D22 (ABC) | 168 | 192 | 0 | 0.26302
Table 16. Problem statistics (presolved values in brackets).
Model | Constraints | Continuous Variables | Non-Zero Coefficients | Binary Variables
Global rigorous coordinator | 7234 (3010) | 4450 (2187) | 29780 (9484) | 1170 (951)
Global partially rigorous coordinator | 6809 (3134) | 4450 (2406) | 28056 (9996) | 1170 (1045)
Global relaxed coordinator | 6786 (2771) | 4423 (2194) | 27731 (8659) | 1154 (923)
Local rigorous on-sites | 3133 (1287) | 2228 (1013) | 12971 (4383) | 607 (394)
Local rigorous off-sites | 3598 (280) | 1991 (266) | 13314 (939) | 488 (56)
Table 17. Results for the aggressive case.
Approach | Iterations | Time (CPU s) | Objective Function ($)
Centralized | 1 | 6301.0 | 935.08
Collaborative | 11 | 626.2 | 935.08
Coordinated - Relaxed | 9 | 401.7 | 905.08
Coordinated - Partially Rigorous | 4 | 195.7 | 929.2
Table 18. Results for the conservative case.
Approach | Iterations | Time (CPU s) | Objective Function ($)
Centralized | 1 | 1488.0 | 935.08
Collaborative | 7 | 386.2 | 898.6
Coordinated - Relaxed | 7 | 333.6 | 894.6
Coordinated - Partially Rigorous | 2 | 78.5 | 889.6