IBM Research: Computing as a Service 
Combinatorial Problem Solving in C10 
How to write programmable solvers declaratively 
Vijay Saraswat 
<firstname>@<lastname>.org 
IBM TJ Watson 
Sep 9, 2014 
© 2005 IBM Corporation Computing as a Service 1
CCP Research Programmes 
Logic Programming 
 In the early 80s, Colmerauer / 
Kowalski and colleagues developed 
definite clause logic programming 
based on a procedural interpretation 
of proofs (“right hand side” 
computation, backward chaining) 
– Operationally one gets non-deterministic 
user-defined recursive 
procedures, accumulating constraints 
over Herbrand terms. 
– Logically, given an atom p(X1, …, Xn), 
the system generates bindings X1=t1, 
…, Xn=tn sufficient to establish the truth 
of the atom, given the program 
clauses. 
– Multiple (implicitly disjunctive) answers 
may be returned. 
(In the grammar below, “,” is conjunction and “;” is disjunction.) 
(Goal) G ::= H | G,G | G;G 
(Program) P ::= H:-G | P,P 
(Atom) H ::= p(t1,…,tn) 
X1=t1, …, Xn=tn ?- p(X1,…,Xn) 
(answer returned by the system | query posed by the user)
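The backward-chaining reading above, with multiple implicitly disjunctive answers, can be sketched as a toy definite-clause interpreter. This is illustrative Python, not C10; the term encoding (tuples for compound terms, capitalised strings for variables) is invented for the sketch.

```python
# A minimal sketch of definite-clause resolution with backward chaining.
# Variables are capitalised strings; compound terms are tuples
# (functor, arg1, ...); constants are plain lowercase strings.

def walk(t, s):
    # Dereference a variable through the substitution s.
    while isinstance(t, str) and t[:1].isupper() and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a[:1].isupper():
        return {**s, a: b}
    if isinstance(b, str) and b[:1].isupper():
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, n):
    # Freshen clause variables per use by suffixing a counter.
    if isinstance(t, str) and t[:1].isupper():
        return t + "_" + str(n)
    if isinstance(t, tuple):
        return tuple(rename(x, n) for x in t)
    return t

def solve(goals, program, s, depth=0):
    # Yield one substitution per proof: multiple (disjunctive) answers.
    if not goals:
        yield s
        return
    g, rest = goals[0], goals[1:]
    for head, body in program:
        head = rename(head, depth)
        body = [rename(b, depth) for b in body]
        s2 = unify(g, head, s)
        if s2 is not None:
            yield from solve(body + rest, program, s2, depth + 1)

# p(X) :- q(X).   q(a).   q(b).
prog = [(("p", "X"), [("q", "X")]),
        (("q", "a"), []),
        (("q", "b"), [])]
answers = [walk("X", s) for s in solve([("p", "X")], prog, {})]
print(answers)  # ['a', 'b']
```

Each answer is a binding for X sufficient to establish p(X) from the clauses; the two answers are implicitly disjunctive.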
CCP Research Programmes 
Prolog III and Constraint Logic Programming 
 At POPL 87, Jaffar and Lassez 
showed how this framework could 
be extended to (essentially) arbitrary 
constraints, implemented by an 
embedded (black box) constraint 
solver 
– Atomic formulas are drawn from an underlying constraint theory (over a vocabulary disjoint from the user-defined predicates) 
 CLP(R) is an exemplar, permitting 
linear arithmetic constraints over R. 
 Prolog III can also be viewed as an 
instance, over a different, rich 
constraint system 
G ::= … | c 
c1, …, ck ?- p(X1,…,Xn) 
(goals may now contain constraints; the answer is a conjunction of constraints)
CCP Research Programmes 
However, … 
 This framework is not flexible 
enough to permit the user to specify 
propagation rules, or search 
strategies in logic. 
 Often constraint solvers are 
incomplete, or highly combinatorial 
and user-defined propagation rules 
are critical (cf cc(FD)) 
 Similarly often critical are search strategies, e.g. run all propagation rules, then expand (non-deterministically) the atom with the fewest choices left (cf. CHIP) 
X in {1,2,3}, X!=Y, Y=2 
?- X in {1,3}
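The domain-propagation example on this slide can be reproduced with a minimal value-removal propagator (an illustrative Python sketch; the representation of domains as sets is an assumption):

```python
# A toy finite-domain propagator for the example on this slide:
# X in {1,2,3}, X != Y, Y = 2  should entail  X in {1,3}.

def propagate(domains, neq_pairs):
    """Run value-removal rules to fixpoint: if one side of X != Y is
    fixed to a value v, remove v from the other side's domain."""
    changed = True
    while changed:
        changed = False
        for a, b in neq_pairs:
            for x, y in ((a, b), (b, a)):
                if len(domains[x]) == 1:
                    v = next(iter(domains[x]))
                    if v in domains[y]:
                        domains[y] = domains[y] - {v}
                        changed = True
    return domains

doms = propagate({"X": {1, 2, 3}, "Y": {2}}, [("X", "Y")])
print(sorted(doms["X"]))  # [1, 3]
```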
CCP Research Programmes 
Claim: (Timed) CCP provides that framework 
 Concurrent Constraint Programming is a 
logical framework based, dually, on the 
notion of Agents, not Goals, on “left hand 
side” computation (“forward chaining”) 
 Operationally one gets non-deterministic 
user-defined recursive procedures, 
accumulating constraints in a shared 
store. 
 Implicative agents (if c D) trigger on the 
presence of a constraint c in the store, 
and take further actions. 
(Agent) D ::= E|c|D,D 
|D;D|if c D 
(Program) P ::= E-:D|P,P 
(Atom) E ::= p(t1,…,tn) 
p(X1,…,Xn) ?- c1 ; … ; ck 
(agent proposed by the user | entailed constraints determined by the system)
 Disjunction is “angelic” – on the LHS, if B entails A, then (A;B) is identical to A, i.e. the more general solution (A) is kept. 
X in {1,2,3}, X!=Y, Y=2 
?- X in {1,3}
CCP Research Programmes 
Example: Map Coloring 
[Figure: excerpt from the MiniZinc tutorial (“2 Basic Modelling in MiniZinc”, Figure 1: Australian states). Seven states and territories must each be coloured so that adjacent regions receive different colours.]
class OzMap(N:Int) { 
type Colors=1..N. 
enum States={wa,nt,sa,nsw,v,t,q}. 
X:Map[States,Colors]. 
agent map { 
X(wa)!=X(nt), X(wa)!=X(sa), 
X(nt)!=X(sa), X(nt)!=X(q), 
X(sa)!=X(q), X(sa)!=X(nsw), X(sa)!=X(v), 
X(q)!=X(nsw), X(nsw)!=X(v), 
all (i in X.domain) 
choose(X(i), 0, X(i).values) 
} 
// Note: type-generic choose 
agent choose[T](X:T, i:Int, v:Rail[T]){ 
if (i < v.size) 
X=v(i); 
choose(X, i+1, v) 
} 
} 
O=new OzMap(3), O.map |- 
X(wa)=1, X(nt)=2, X(sa)=3, 
X(q)=1, X(nsw)=2, X(v)=1, 
X(t)=1 
? 
But no control between propagation and choice!
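For comparison, the same Australian map-colouring problem can be solved by plain backtracking search (an illustrative Python re-implementation, not the C10 model; the names ADJ, STATES and colour are invented here):

```python
# The OzMap constraints as a small backtracking search over 3 colours.

ADJ = [("wa","nt"),("wa","sa"),("nt","sa"),("nt","q"),("sa","q"),
       ("sa","nsw"),("sa","v"),("q","nsw"),("nsw","v")]
STATES = ["wa","nt","sa","nsw","v","t","q"]

def colour(i=0, asg=None):
    # Assign colours to STATES[i:] consistently with ADJ, backtracking.
    asg = {} if asg is None else asg
    if i == len(STATES):
        return dict(asg)
    s = STATES[i]
    for c in (1, 2, 3):
        # c is allowed if no already-coloured neighbour of s has colour c.
        if all(asg.get(t) != c for a, b in ADJ
               for t in ((b,) if a == s else (a,) if b == s else ())):
            asg[s] = c
            sol = colour(i + 1, asg)
            if sol:
                return sol
            del asg[s]   # undo the choice and try the next colour
    return None

sol = colour()
print(sol)
```

Unlike the C10 version, the interleaving of pruning and choice here is hard-wired into the search loop, which is exactly the lack of control the slide complains about.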
CCP Research Programmes 
Another example: Zebra 
class Zebra { 
enum Nationalities = {english, spanish, ukrainian, norwegian, japanese}. 
enum Colors = {red, green, ivory, yellow, blue}. 
enum Animals = {dog, fox, horse, zebra, snails}. 
enum Drinks = {coffee, tea, milk, orangeJuice, water}. 
enum Cigarettes = {oldGold, kools, chesterfields, luckyStrike, parliaments}. 
type Houses = Int(0,4). 
vars: Array(0..4,0..4)[Houses]. 
Nation = vars(0), Color = vars(1), Animal = vars(2), 
Drink = vars(3), Smoke = vars(4). 
agent rightof(h1:Houses, h2:Houses){ h1=h2+1 } 
agent nextto(h1:Houses, h2:Houses) { rightof(h1,h2) ; rightof(h2,h1) } 
agent middle(h:Houses) { h=2 } 
agent left(h:Houses) { h=0 } 
agent constraints { 
alldifferent(Nation), alldifferent(Color), alldifferent(Animal), 
alldifferent(Drink), alldifferent(Smoke), 
Nation(english) = Color(red), Nation(spanish) = Animal(dog), 
Drink(coffee) = Color(green), Nation(ukrainian) = Drink(tea), 
rightof(Color(green), Color(ivory)), 
…}
CCP Research Programmes 
Zebra – but the “control” is programmable 
agent alldifferent[T](A:Array[T]) { 
all (i in A.domain, j in A.domain{j !=i}) 
A(j) != A(i) // assuming != available as a constraint on two variables 
} 
// Alternate 
agent alldifferent[T](A:Array[T]) { 
all (i in A.domain, j in A.domain{j !=i}) 
all (k in A(i).domain) 
if (A(i) = k) A(j) != k // removes k from A(j)’s domain 
} 
alldifferent/1 – a “global constraint” – is just a user-defined propagator. 
If A(i)=k (for any value k and index i) then k is removed from the domain of all 
other variables A(j).
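The alternate decomposition above behaves like this value-removal loop, run to fixpoint (an illustrative Python sketch, not C10):

```python
# alldifferent as a user-defined propagator: whenever some variable's
# domain collapses to {k}, remove k from every other variable's domain,
# repeating until nothing changes.

def alldifferent_propagate(doms):
    changed = True
    while changed:
        changed = False
        for i, d in enumerate(doms):
            if len(d) == 1:
                (k,) = d
                for j, e in enumerate(doms):
                    if j != i and k in e:
                        doms[j] = e - {k}
                        changed = True
    return doms

doms = alldifferent_propagate([{1}, {1, 2}, {1, 2, 3}])
print(doms)  # [{1}, {2}, {3}]
```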
CCP Research Programmes 
But how do we ensure propagation before choice? 
 Use time – Timed CCP. 
 TCC is obtained by extending CC 
“uniformly” across time. A CC 
computation is run at time t (starting with 
t=0) till quiescence. Then all agents A 
s.t. the current time instant has the agent 
next A are collected, and executed at 
the next time instant. 
– Thus TCC provides a logical way to 
arrange executions in a total order. 
 In TCC, the store may be changed non-monotonically between time instants. 
 This is not possible in Punctuated CCP – here all constraint tells are done within an always, hence constraints persist across time. 
(Agent) D ::= … | next D 
(Program) P ::= E-:D | P,P 
(Atom) E ::= p(t1,…,tn) 
p(X1,…,Xn) ?- 
(c1_1, next(c1_2, next(… c1_m1)…)); 
…; 
(ck_1, next(ck_2, next(… ck_mk)…)) 
(agent proposed by the user | entailed constraints, across time, determined by the system) 
The result is (c1_m1; …; ck_mk).
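The TCC discipline of running to quiescence within an instant, then advancing time with the collected next agents, can be sketched as follows (illustrative Python; the tell/ask/next agent encoding is invented for the sketch, and the store is reset between instants to exhibit non-monotonic change):

```python
# A sketch of TCC execution: run CC propagation to quiescence at the
# current instant, then start the next instant from the agents that
# were registered with `next`.

def run_tcc(initial_agents, max_instants=10):
    agents, trace = list(initial_agents), []
    for t in range(max_instants):
        store, nexts, changed = set(), [], True
        while changed:                      # run to quiescence at time t
            changed = False
            for kind, payload in agents:
                if kind == "tell" and payload not in store:
                    store.add(payload); changed = True
                elif kind == "ask":         # implicative agent: if c, D
                    c, then = payload
                    if c in store and then not in agents:
                        agents.append(then); changed = True
                elif kind == "next" and payload not in nexts:
                    nexts.append(payload)
        trace.append(sorted(store))
        if not nexts:
            break
        agents = nexts                      # store is reset: non-monotonic
    return trace

# instant 0: tell a; if a then (tell b); next (tell c)
trace = run_tcc([("tell", "a"),
                 ("ask", ("a", ("tell", "b"))),
                 ("next", ("tell", "c"))])
print(trace)  # [['a', 'b'], ['c']]
```

The constraint a (and the derived b) does not survive into instant 1: persistence would require re-telling it under an always.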
CCP Research Programmes 
Zebra – but the “control” is programmable 
agent solve { 
(I,J) = argmin((i,j:vars.domain)=>values(vars(i,j)).size), 
if (values(vars(I,J)).size > 1) next { 
always choose(vars(I,J)), 
next solve() 
} 
} 
values is the only “primitive” indexical – returns a rail of values for its argument 
variable. 
solve implements the strategy of alternating between time instants in which propagation 
happens, and time instants in which the decision is made of the variable to split. It 
terminates when no more variables are left to split. 
(I,J) is the index s.t. values(vars(I,J)).size is minimized. If there are k > 1 values, choose (i.e. branch disjunctively k-ways) in the next time instant. This will automatically cause propagation (in that time instant).
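The first-fail strategy that solve implements (branch on the variable with the fewest values left, propagating after each choice) looks like this in a toy setting (illustrative Python over disequality constraints only, not the C10 indexical machinery):

```python
# First-fail search: propagate, then split the unfixed variable with
# the smallest domain, recursing on each value disjunctively.

def search(doms, neqs):
    doms = {v: set(d) for v, d in doms.items()}
    change = True
    while change:                      # propagation to fixpoint
        change = False
        for a, b in neqs:
            for x, y in ((a, b), (b, a)):
                if len(doms[x]) == 1 and next(iter(doms[x])) in doms[y]:
                    doms[y].discard(next(iter(doms[x]))); change = True
    if any(not d for d in doms.values()):
        return None                    # some domain emptied: failure
    open_vars = [v for v in doms if len(doms[v]) > 1]
    if not open_vars:
        return {v: next(iter(d)) for v, d in doms.items()}
    v = min(open_vars, key=lambda w: len(doms[w]))  # fewest choices left
    for val in sorted(doms[v]):        # branch k-ways
        sol = search({**doms, v: {val}}, neqs)
        if sol:
            return sol
    return None

sol = search({"X": {1, 2}, "Y": {1, 2}, "Z": {1, 2, 3}},
             [("X", "Y"), ("Y", "Z"), ("X", "Z")])
print(sol)  # {'X': 1, 'Y': 2, 'Z': 3}
```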
CCP Research Programmes 
Zebra – but the “control” is programmable 
public def agent main(Rail[String]) { 
z = new Zebra, 
always z.constraints, 
always all (i,j in z.vars.domain) 
if (values(z.vars(i,j)).size <= 1) choose(z.vars(i,j)), 
next z.solve 
} 
The main method. Creates a new Zebra problem, asserts its constraints in all 
time instants, and sets up a propagator that forces the value of any variable 
as soon as it has only one value left in its extent. 
Also sets up the solve agent to alternate between propagation and choice.
CCP Research Programmes 
(Aside: In fact RCC gives you CCP+CLP and more) 
 Much richer capabilities in RCC, while 
staying within the paradigm 
– Fully recursive goals (CLP) are available. 
– Agents with deep guards permit triggers to 
be recursively defined 
– Goals with deep guards permit conditional 
recursive augmentation of the store, for the 
purposes of answering the goal 
– Universal goals permit parametric goal 
solving 
(Agent) D ::= E|c|D,D 
|D;D|if G D 
|all X D 
|some X D 
(Goal) G ::= H|c|G,G 
|G;G|if D G 
|all X G 
|some X G 
(Program) P ::= E-:D|P,P 
(Atom) E ::= p(t1,…,tn) 
p(X1,…,Xn) ?- c1 ; … ; ck
CCP Research Programmes 
Crossgrams 
 A puzzle suggested to me by Gopal Raghavan. 
 Find words w in the English language such that for each position i in w there is a distinct anagram of w starting with the letter at the i'th position. 
– Example: emits – the corresponding anagrams are mites, items, 
times, smite. 
 Here we consider the simpler version that drops the distinctness 
requirement (just to keep the program slightly smaller). 
CCP Research Programmes 
Key to solution 
 Given a list of N dictionary words W, generate N facts 
word(W, L, S, Ws) where 
– L is the first character of W, 
– Ws is a (possibly non-English word) anagram of W in which 
all letters are in increasing sort order, and 
– S is the first letter of Ws. 
– E.g. word(emits, e, e, eimst) / word(items, i, 
e, eimst) / word(mites, m, e, eimst), … 
 With these clauses, crossgrams can be generated 
quickly by backtracking (assuming facts are indexed 
appropriately, as they are in many Prolog systems). 
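The indexing idea can be checked directly: key each word by its sorted letters, record the first letters available in each anagram class, and test membership (illustrative Python with a tiny built-in word list standing in for the dictionary; this version drops the distinctness requirement, as on the previous slide):

```python
from collections import defaultdict

WORDS = ["emits", "mites", "items", "times", "smite", "stone", "notes"]

# word(W, L, S, Ws): group words by Ws (their letters in sorted order)
# and record the first letters L occurring in each group.
by_sig = defaultdict(set)
for w in WORDS:
    by_sig["".join(sorted(w))].add(w[0])

def is_crossgram(w):
    # w is a crossgram if, for each letter of w, some anagram of w
    # (in the same sorted-signature group) starts with that letter.
    sig = "".join(sorted(w))
    return all(ch in by_sig[sig] for ch in w)

crossgrams = [w for w in WORDS if is_crossgram(w)]
print(crossgrams)  # ['emits', 'mites', 'items', 'times', 'smite']
```

The sorted-letter signature plays the role of Ws in the word/4 facts: it is an (often non-English) canonical anagram on which lookups can be indexed.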
CCP Research Programmes 
Crossgrams: illustrates use of if D G goals 
class Crossgram { 
@Gentzen def word(Atom, Char, Char, Atom). 
goal crossgrams(Dict: List[Atom]) = R!{ 
if (all (W in Dict) { 
W = name(Wls), 
Wsls = Wls.msort(), 
word(W, Wls.head, Wsls.head, name(Wsls)) 
}) 
words(R) 
} 
goal words(R:List[Atom]) { 
word(A,C,C,Ws), A=name(Wls), 
R=List(A, Wls.tail.map((L:Char)=>W!{word(W,L,_,Ws)})) 
}} 
Assert word/4 constraints on the fly, based on the dictionary, and add them to the store. The words goal is then solved in the context of the generated constraints. The embedded word(W,L,_,Ws) lookup could fail, triggering backtracking.
CCP Research Programmes 
Conclusion 
 (Timed) CCP provides a dual logic programming 
framework which is much closer to more conventional 
“constraint programming” than CLP. 
 In particular, TCC permits the use of user-defined 
propagators and search procedures. 
 R[T]CC considerably enriches programming power, 
beyond TCC. 

More Related Content

What's hot

Deep Learning through Examples
Deep Learning through ExamplesDeep Learning through Examples
Deep Learning through ExamplesSri Ambati
 
Convolutional Neural Network
Convolutional Neural NetworkConvolutional Neural Network
Convolutional Neural NetworkJunho Cho
 
Language translation with Deep Learning (RNN) with TensorFlow
Language translation with Deep Learning (RNN) with TensorFlowLanguage translation with Deep Learning (RNN) with TensorFlow
Language translation with Deep Learning (RNN) with TensorFlowS N
 
160205 NeuralArt - Understanding Neural Representation
160205 NeuralArt - Understanding Neural Representation160205 NeuralArt - Understanding Neural Representation
160205 NeuralArt - Understanding Neural RepresentationJunho Cho
 
Tutorial on convolutional neural networks
Tutorial on convolutional neural networksTutorial on convolutional neural networks
Tutorial on convolutional neural networksHojin Yang
 
Transfer Learning: An overview
Transfer Learning: An overviewTransfer Learning: An overview
Transfer Learning: An overviewjins0618
 
Deep Learning Tutorial
Deep Learning Tutorial Deep Learning Tutorial
Deep Learning Tutorial Ligeng Zhu
 
Deep Learning Cases: Text and Image Processing
Deep Learning Cases: Text and Image ProcessingDeep Learning Cases: Text and Image Processing
Deep Learning Cases: Text and Image ProcessingGrigory Sapunov
 
NIPS2017 Few-shot Learning and Graph Convolution
NIPS2017 Few-shot Learning and Graph ConvolutionNIPS2017 Few-shot Learning and Graph Convolution
NIPS2017 Few-shot Learning and Graph ConvolutionKazuki Fujikawa
 
Predicting organic reaction outcomes with weisfeiler lehman network
Predicting organic reaction outcomes with weisfeiler lehman networkPredicting organic reaction outcomes with weisfeiler lehman network
Predicting organic reaction outcomes with weisfeiler lehman networkKazuki Fujikawa
 
ECCV2010: feature learning for image classification, part 4
ECCV2010: feature learning for image classification, part 4ECCV2010: feature learning for image classification, part 4
ECCV2010: feature learning for image classification, part 4zukun
 
Deep Learning: Chapter 11 Practical Methodology
Deep Learning: Chapter 11 Practical MethodologyDeep Learning: Chapter 11 Practical Methodology
Deep Learning: Chapter 11 Practical MethodologyJason Tsai
 
Deep learning: what? how? why? How to win a Kaggle competition
Deep learning: what? how? why? How to win a Kaggle competitionDeep learning: what? how? why? How to win a Kaggle competition
Deep learning: what? how? why? How to win a Kaggle competition317070
 
Deep Neural Networks for Multimodal Learning
Deep Neural Networks for Multimodal LearningDeep Neural Networks for Multimodal Learning
Deep Neural Networks for Multimodal LearningMarc Bolaños Solà
 
Convolutional neural networks for image classification — evidence from Kaggle...
Convolutional neural networks for image classification — evidence from Kaggle...Convolutional neural networks for image classification — evidence from Kaggle...
Convolutional neural networks for image classification — evidence from Kaggle...Dmytro Mishkin
 
Cheat sheets for AI
Cheat sheets for AICheat sheets for AI
Cheat sheets for AINcib Lotfi
 
Fcv learn yu
Fcv learn yuFcv learn yu
Fcv learn yuzukun
 
Deep Learning in Computer Vision
Deep Learning in Computer VisionDeep Learning in Computer Vision
Deep Learning in Computer VisionSungjoon Choi
 
Learn to Build an App to Find Similar Images using Deep Learning- Piotr Teterwak
Learn to Build an App to Find Similar Images using Deep Learning- Piotr TeterwakLearn to Build an App to Find Similar Images using Deep Learning- Piotr Teterwak
Learn to Build an App to Find Similar Images using Deep Learning- Piotr TeterwakPyData
 

What's hot (20)

Deep Learning through Examples
Deep Learning through ExamplesDeep Learning through Examples
Deep Learning through Examples
 
Convolutional Neural Network
Convolutional Neural NetworkConvolutional Neural Network
Convolutional Neural Network
 
Language translation with Deep Learning (RNN) with TensorFlow
Language translation with Deep Learning (RNN) with TensorFlowLanguage translation with Deep Learning (RNN) with TensorFlow
Language translation with Deep Learning (RNN) with TensorFlow
 
160205 NeuralArt - Understanding Neural Representation
160205 NeuralArt - Understanding Neural Representation160205 NeuralArt - Understanding Neural Representation
160205 NeuralArt - Understanding Neural Representation
 
Tutorial on convolutional neural networks
Tutorial on convolutional neural networksTutorial on convolutional neural networks
Tutorial on convolutional neural networks
 
Transfer Learning: An overview
Transfer Learning: An overviewTransfer Learning: An overview
Transfer Learning: An overview
 
Deep Learning Tutorial
Deep Learning Tutorial Deep Learning Tutorial
Deep Learning Tutorial
 
NUS PhD e-open day 2020
NUS PhD e-open day 2020NUS PhD e-open day 2020
NUS PhD e-open day 2020
 
Deep Learning Cases: Text and Image Processing
Deep Learning Cases: Text and Image ProcessingDeep Learning Cases: Text and Image Processing
Deep Learning Cases: Text and Image Processing
 
NIPS2017 Few-shot Learning and Graph Convolution
NIPS2017 Few-shot Learning and Graph ConvolutionNIPS2017 Few-shot Learning and Graph Convolution
NIPS2017 Few-shot Learning and Graph Convolution
 
Predicting organic reaction outcomes with weisfeiler lehman network
Predicting organic reaction outcomes with weisfeiler lehman networkPredicting organic reaction outcomes with weisfeiler lehman network
Predicting organic reaction outcomes with weisfeiler lehman network
 
ECCV2010: feature learning for image classification, part 4
ECCV2010: feature learning for image classification, part 4ECCV2010: feature learning for image classification, part 4
ECCV2010: feature learning for image classification, part 4
 
Deep Learning: Chapter 11 Practical Methodology
Deep Learning: Chapter 11 Practical MethodologyDeep Learning: Chapter 11 Practical Methodology
Deep Learning: Chapter 11 Practical Methodology
 
Deep learning: what? how? why? How to win a Kaggle competition
Deep learning: what? how? why? How to win a Kaggle competitionDeep learning: what? how? why? How to win a Kaggle competition
Deep learning: what? how? why? How to win a Kaggle competition
 
Deep Neural Networks for Multimodal Learning
Deep Neural Networks for Multimodal LearningDeep Neural Networks for Multimodal Learning
Deep Neural Networks for Multimodal Learning
 
Convolutional neural networks for image classification — evidence from Kaggle...
Convolutional neural networks for image classification — evidence from Kaggle...Convolutional neural networks for image classification — evidence from Kaggle...
Convolutional neural networks for image classification — evidence from Kaggle...
 
Cheat sheets for AI
Cheat sheets for AICheat sheets for AI
Cheat sheets for AI
 
Fcv learn yu
Fcv learn yuFcv learn yu
Fcv learn yu
 
Deep Learning in Computer Vision
Deep Learning in Computer VisionDeep Learning in Computer Vision
Deep Learning in Computer Vision
 
Learn to Build an App to Find Similar Images using Deep Learning- Piotr Teterwak
Learn to Build an App to Find Similar Images using Deep Learning- Piotr TeterwakLearn to Build an App to Find Similar Images using Deep Learning- Piotr Teterwak
Learn to Build an App to Find Similar Images using Deep Learning- Piotr Teterwak
 

Similar to The Concurrent Constraint Programming Research Programmes -- Redux (part2)

nlp dl 1.pdf
nlp dl 1.pdfnlp dl 1.pdf
nlp dl 1.pdfnyomans1
 
Two methods for optimising cognitive model parameters
Two methods for optimising cognitive model parametersTwo methods for optimising cognitive model parameters
Two methods for optimising cognitive model parametersUniversity of Huddersfield
 
Applying Linear Optimization Using GLPK
Applying Linear Optimization Using GLPKApplying Linear Optimization Using GLPK
Applying Linear Optimization Using GLPKJeremy Chen
 
Computational Intelligence Assisted Engineering Design Optimization (using MA...
Computational Intelligence Assisted Engineering Design Optimization (using MA...Computational Intelligence Assisted Engineering Design Optimization (using MA...
Computational Intelligence Assisted Engineering Design Optimization (using MA...AmirParnianifard1
 
Unsupervised program synthesis
Unsupervised program synthesisUnsupervised program synthesis
Unsupervised program synthesisAmrith Krishna
 
Model-based GUI testing using UPPAAL
Model-based GUI testing using UPPAALModel-based GUI testing using UPPAAL
Model-based GUI testing using UPPAALUlrik Hørlyk Hjort
 
Parallel Computing 2007: Bring your own parallel application
Parallel Computing 2007: Bring your own parallel applicationParallel Computing 2007: Bring your own parallel application
Parallel Computing 2007: Bring your own parallel applicationGeoffrey Fox
 
Minimal Introduction to C++ - Part I
Minimal Introduction to C++ - Part IMinimal Introduction to C++ - Part I
Minimal Introduction to C++ - Part IMichel Alves
 
Informatics Practices (new) solution CBSE 2021, Compartment, improvement ex...
Informatics Practices (new) solution CBSE  2021, Compartment,  improvement ex...Informatics Practices (new) solution CBSE  2021, Compartment,  improvement ex...
Informatics Practices (new) solution CBSE 2021, Compartment, improvement ex...FarhanAhmade
 
[PR12] PR-036 Learning to Remember Rare Events
[PR12] PR-036 Learning to Remember Rare Events[PR12] PR-036 Learning to Remember Rare Events
[PR12] PR-036 Learning to Remember Rare EventsTaegyun Jeon
 
High-Performance Haskell
High-Performance HaskellHigh-Performance Haskell
High-Performance HaskellJohan Tibell
 
Transfer Learning for Improving Model Predictions in Highly Configurable Soft...
Transfer Learning for Improving Model Predictions in Highly Configurable Soft...Transfer Learning for Improving Model Predictions in Highly Configurable Soft...
Transfer Learning for Improving Model Predictions in Highly Configurable Soft...Pooyan Jamshidi
 
Inria Tech Talk - La classification de données complexes avec MASSICCC
Inria Tech Talk - La classification de données complexes avec MASSICCCInria Tech Talk - La classification de données complexes avec MASSICCC
Inria Tech Talk - La classification de données complexes avec MASSICCCStéphanie Roger
 
Aad introduction
Aad introductionAad introduction
Aad introductionMr SMAK
 
Data_Structure_and_Algorithms_Lecture_1.ppt
Data_Structure_and_Algorithms_Lecture_1.pptData_Structure_and_Algorithms_Lecture_1.ppt
Data_Structure_and_Algorithms_Lecture_1.pptISHANAMRITSRIVASTAVA
 

Similar to The Concurrent Constraint Programming Research Programmes -- Redux (part2) (20)

ML unit-1.pptx
ML unit-1.pptxML unit-1.pptx
ML unit-1.pptx
 
nlp dl 1.pdf
nlp dl 1.pdfnlp dl 1.pdf
nlp dl 1.pdf
 
Two methods for optimising cognitive model parameters
Two methods for optimising cognitive model parametersTwo methods for optimising cognitive model parameters
Two methods for optimising cognitive model parameters
 
Applying Linear Optimization Using GLPK
Applying Linear Optimization Using GLPKApplying Linear Optimization Using GLPK
Applying Linear Optimization Using GLPK
 
R Language Introduction
R Language IntroductionR Language Introduction
R Language Introduction
 
Computational Intelligence Assisted Engineering Design Optimization (using MA...
Computational Intelligence Assisted Engineering Design Optimization (using MA...Computational Intelligence Assisted Engineering Design Optimization (using MA...
Computational Intelligence Assisted Engineering Design Optimization (using MA...
 
Unsupervised program synthesis
Unsupervised program synthesisUnsupervised program synthesis
Unsupervised program synthesis
 
Model-based GUI testing using UPPAAL
Model-based GUI testing using UPPAALModel-based GUI testing using UPPAAL
Model-based GUI testing using UPPAAL
 
3.5
3.53.5
3.5
 
Parallel Computing 2007: Bring your own parallel application
Parallel Computing 2007: Bring your own parallel applicationParallel Computing 2007: Bring your own parallel application
Parallel Computing 2007: Bring your own parallel application
 
Minimal Introduction to C++ - Part I
Minimal Introduction to C++ - Part IMinimal Introduction to C++ - Part I
Minimal Introduction to C++ - Part I
 
Informatics Practices (new) solution CBSE 2021, Compartment, improvement ex...
Informatics Practices (new) solution CBSE  2021, Compartment,  improvement ex...Informatics Practices (new) solution CBSE  2021, Compartment,  improvement ex...
Informatics Practices (new) solution CBSE 2021, Compartment, improvement ex...
 
[PR12] PR-036 Learning to Remember Rare Events
[PR12] PR-036 Learning to Remember Rare Events[PR12] PR-036 Learning to Remember Rare Events
[PR12] PR-036 Learning to Remember Rare Events
 
Signals and Systems Homework Help.pptx
Signals and Systems Homework Help.pptxSignals and Systems Homework Help.pptx
Signals and Systems Homework Help.pptx
 
modeling.ppt
modeling.pptmodeling.ppt
modeling.ppt
 
High-Performance Haskell
High-Performance HaskellHigh-Performance Haskell
High-Performance Haskell
 
Transfer Learning for Improving Model Predictions in Highly Configurable Soft...
Transfer Learning for Improving Model Predictions in Highly Configurable Soft...Transfer Learning for Improving Model Predictions in Highly Configurable Soft...
Transfer Learning for Improving Model Predictions in Highly Configurable Soft...
 
Inria Tech Talk - La classification de données complexes avec MASSICCC
Inria Tech Talk - La classification de données complexes avec MASSICCCInria Tech Talk - La classification de données complexes avec MASSICCC
Inria Tech Talk - La classification de données complexes avec MASSICCC
 
Aad introduction
Aad introductionAad introduction
Aad introduction
 
Data_Structure_and_Algorithms_Lecture_1.ppt
Data_Structure_and_Algorithms_Lecture_1.pptData_Structure_and_Algorithms_Lecture_1.ppt
Data_Structure_and_Algorithms_Lecture_1.ppt
 

The Concurrent Constraint Programming Research Programmes -- Redux (part2)

  • 1. IBM Research: Computing as a Service Combinatorial Problem Solving in C10 How to write programmable solvers declaratively Vijay Saraswat <firstname>@<lastname>.org IBM TJ Watson Sep 9, 2014 © 2005 IBM Corporation Computing as a Service 1
  • 2. CCP Research Programmes Logic Programming  In the early 80s, Colmerauer / Kowalski and colleagues developed definite clause logic programming based on a procedural interpretation of proofs (“right hand side” computation, backward chaining) – Operationally one gets non-deterministic user-defined recursive procedures, accumulating constraints over Herbrand terms. – Logically, given an atom p(X1, …, Xn), the system generates bindings X1=t1, …, Xn=tn sufficient to establish the truth of the atom, given the program clauses. – Multiple (implicitly disjunctive) answers may be returned. conjunction disjunction © 2009 IBM Corporation IBM Research 2 (Goal) G ::= H|G,G|G;G (Program) P ::= H:-G|P,P (Atom) H ::= p(t1,…,tn) X1=t1, …, Xn=tn ?- p(X1,…,Xn) Answer returned by Query posed by user system
  • 3. CCP Research Programmes Prolog 3 and Constraint Logic Programming  At POPL 87, Jaffar and Lassez showed how this framework could be extended to (essentially) arbitrary constraints, implemented by an embedded (black box) constraint solver – Atomic formulas drawn from some underlying constraint theory (over a vocabulary disjoint from the user-defined predicates E)  CLP(R) is an exemplar, permitting linear arithmetic constraints over R.  Prolog III can also be viewed as an instance, over a different, rich constraint system © 2009 IBM Corporation IBM Research 3 G ::= c c1, …, ck ?- p(X1,…,Xn)
  • 4. CCP Research Programmes However, …  This framework is not flexible enough to permit the user to specify propagation rules, or search strategies in logic.  Often constraint solvers are incomplete, or highly combinatorial and user-defined propagation rules are critical (cf cc(FD))  Similarly often critical are search strategies, e.g. run all propagation rules, then expand (non-deterministically) Research the atom with the fewest choices left (cf CHIP) IBM © 2009 IBM Corporation 4 X in {1,2,3}, X!=Y, Y=2 ?- X in {1,3}
  • 5. CCP Research Programmes Claim: (Timed) CCP provides that framework  Concurrent Constraint Programming is a logical framework based, dually, on the notion of Agents, not Goals, on “left hand side” computation (“forward chaining”)  Operationally one gets non-deterministic user-defined recursive procedures, accumulating constraints in a shared store.  Implicative agents (if c D) trigger on the presence of a constraint c in the store, and take further actions. (Agent) D ::= E|c|D,D |D;D|if c D (Program) P ::= E-:D|P,P (Atom) E ::= p(t1,…,tn) p(X1,…,Xn) ?- c1 ; … ; ck Entailed constraints determined by system Agent proposed by user Research  Disjunction is “angelic” – on the LHS, if B entails A, then (A;B) is identical to A, i.e IBM the more general solution (A) is kept. © 2009 IBM Corporation 5 X in {1,2,3}, X!=Y, Y=2 ?- X in {1,3}
  • 6. CCP Research Programmes Example: Map Coloring © 2009 IBM Corporation IBM Research 6 is specializable to individual backend solvers, so they can control what form constraints end up in. In particular, MiniZinc allows the specification of global constraints by decomposition. 2 Basic Modelling in MiniZinc In this section we introduce the basic structure of a MiniZinc model using two simple exam-ples. 2.1 Our First Example Figure 1: Australian states. As our first example, imagine that we wish to colour a map of Australia as shown in Figure 1. It is made up of seven different states and territories each of which must be given a colour so that adjacent regions have different colours. We can model this problem very easily in MiniZinc. The model is shown in Figure 2. The first line in the model is a comment. A comment starts with a ‘%’ which indicates that the rest of the line is a comment. MiniZinc has no begin/ end comment symbols (such as C’s / * and * / comments). The next part of the model declares the variables in the model. The line 3 class OzMap(N:Int) { type Colors=1..N. enum States={wa,nt,sa,nsw,v,t,q}. X:Map[States,Colors]. agent map { X(wa)!= X(nt), X(wa)!=X(sa), X(nt)!= X(sa), X(nt)!=X(q), X(sa)!= X(q), X(sa)!=X(nsw), X(sa)!=X(v), X(q) != X(nsw),X(nsw)!=X(v), all (i in X.domain) choose(X(i), 0, X(i).values) } agent choose[T](X:T, i:Int, v:Rail[T]){ if (i < V.size) Note: type-generic choose X=v(i); choose(X, i+1, v) } } O=new OzMap(3), O.map |- X(wa)=1,X(nt)=2, X(sa)=3, X(qa)=1, X(nsw)=2, X(v)=1, X(t)=1 ? But no control between propagation and choice!
  • 7. CCP Research Programmes Another example: Zebra © 2009 IBM Corporation IBM Research 7 class Zebra { enum Nationalities = {english, spanish, ukrainian, norwegian, japanese}. enum Colors = {red, green, ivory, yellow, blue}. enum Animals = {dog, fox, horse, zebra, snails}. enum Drinks = {coffee, tea, milk, orangeJuice, water}. enum Cigarettes = {oldGold, kools, chesterfields, luckyStrike, parliaments}. type Houses = Int(0,4). vars: Array(0..4,0..4)[Houses]. Nation = variables(0), Color = variables(1), Animal = variables(2), Drink = variables(3), Smoke = variables(4). agent rightof(h1:Houses, h2:Houses){ h1=h2+1} agent nextto(h1:Houses, h2:Houses) { rightof(h1,h2) ; rightof(h2,h2)} agent middle(h:Houses) {h=3} agent left(h1:Houses) {h=1} agent constraints { alldifferent(Nation), alldifferent(Color), alldifferent(Animal), alldifferent(Drink), alldifferent(Smoke), Nation(english) = Color(red), Nation(spaniard) = Animal(dog), Drink(coffee) = Color(green), Nation(ukrainian) = Drink(tea), rightof(Color(green), Color(ivory)), …}
  • 8. CCP Research Programmes Zebra – but the “control” is programmable agent alldifferent[T](A:Array[T]) { all (i in A.domain, j in A.domain{j !=i}) A(j) != A(i) // assuming != available as a constraint on two variables } © 2009 IBM Corporation IBM Research 8 // Alternate agent alldifferent[T](A:Array[T]) { all (i in A.domain, j in A.domain{j !=i}) all (k in A(i).domain) if (A(i) = k) A(j) != k // removes k from A(j)’s domain } alldifferent/1 – a “global constraint” – is just a user-defined propagator. If A(i)=k (for any value k and index i) then k is removed from the domain of all other variables A(j).
  • 9. CCP Research Programmes But how do we ensure propagation before choice?  Use time – Timed CCP.  TCC is obtained by extending CC “uniformly” across time. A CC computation is run at time t (starting with t=0) till quiescence. Then all agents A s.t. the current time instant has the agent next A are collected, and executed at the next time instant. – Thus TCC provides a logical way to arrange executions in a total order.  In TCC, the store may be changed non-monotonically between time instants. Research  This is not possible in Punctuated CCP – here all constraint tells are done within an always, hence constraints persist IBM across time. © 2009 IBM Corporation 9 (Agent) D ::= next c D (Program) P ::= E-:D|P,P (Atom) E ::= p(t1,…,tn) p(X1,…,Xn) ?- (c1 1 ,next(c1 2,next(…c1 m 1)…)); …; (ck 1 ,next(ck 2, next(…ck m k)…)) Entailed constraints, across time determined by the system Agent proposed by user The result is (c1 m 1; …; ck m k).
• 10. CCP Research Programmes
Zebra – but the “control” is programmable
agent solve {
  (I,J) = argmin((i,j:vars.domain) => values(vars(i,j)).size),
  if (values(vars(I,J)).size > 1)
    next { always choose(vars(I,J)), next solve() }
}
values is the only “primitive” indexical – it returns a rail of values for its argument variable.
solve implements the strategy of alternating between time instants in which propagation happens and time instants in which the decision is made of which variable to split. It terminates when no more variables are left to split.
(I,J) is the index such that values(vars(I,J)).size is minimized. If there are k > 1 values, choose (i.e. branch disjunctively k ways) in the next time instant. This will automatically cause propagation (in that time instant).
© 2009 IBM Corporation IBM Research 10
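This strategy – pick the unfixed variable with the fewest remaining values, branch on each value, propagate after every choice – is ordinary first-fail search. A Python sketch over flat domain lists, using singleton-elimination as the propagator (an illustrative assumption; the C10 version interleaves the same steps with time instants):

```python
def alldiff(doms):
    """Singleton-elimination propagation (cf. the alldifferent slide)."""
    changed = True
    while changed:
        changed = False
        for i, d in enumerate(doms):
            if len(d) == 1:
                (k,) = d
                for j, e in enumerate(doms):
                    if j != i and k in e:
                        e.discard(k)
                        changed = True
    return all(doms)                        # True iff no domain was wiped out

def solve(doms):
    if not alldiff(doms):
        return None                         # propagation failed: dead branch
    open_vars = [i for i, d in enumerate(doms) if len(d) > 1]
    if not open_vars:
        return [next(iter(d)) for d in doms]        # all variables fixed
    i = min(open_vars, key=lambda v: len(doms[v]))  # smallest domain first
    for k in sorted(doms[i]):
        child = [set(d) for d in doms]
        child[i] = {k}                      # choose: commit variable i to k
        sol = solve(child)
        if sol is not None:
            return sol
    return None
```

For instance, solve([{1,2}, {1,2}, {1,2,3}]) returns [1, 2, 3], while solve([{1,2}, {1,2}, {1,2}]) returns None.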
• 11. CCP Research Programmes
Zebra – but the “control” is programmable
public def agent main(Rail[String]) {
  z = new Zebra,
  always z.constraints,
  always all (i,j in z.vars.domain)
    if (values(vars(i,j)).size == 1) choose(vars(i,j)),
  next z.solve
}
The main method. It creates a new Zebra problem, asserts its constraints in all time instants, and sets up a propagator that forces the value of any variable as soon as it has only one value left in its extent. It also sets up the solve agent to alternate between propagation and choice.
© 2009 IBM Corporation IBM Research 11
• 12. CCP Research Programmes
(Aside: In fact RCC gives you CCP+CLP and more)
 Much richer capabilities in RCC, while staying within the paradigm
– Fully recursive goals (CLP) are available.
– Agents with deep guards permit triggers to be recursively defined.
– Goals with deep guards permit conditional recursive augmentation of the store, for the purposes of answering the goal.
– Universal goals permit parametric goal solving.
(Agent) D ::= E | c | D,D | D;D | if G D | all X D | some X D
(Goal) G ::= H | c | G,G | G;G | if D G | all X G | some X G
(Program) P ::= E-:D | P,P
(Atom) E ::= p(t1,…,tn)
p(X1,…,Xn) ?- c1 ; … ; ck
© 2009 IBM Corporation IBM Research 12
• 13. CCP Research Programmes
Crossgrams
 A puzzle suggested to me by Gopal Raghavan.
 Find words w in the English language such that for each position i in w there is a distinct anagram of w starting with the letter at the i’th position.
– Example: emits – the corresponding anagrams are mites, items, times, smite.
 Here we consider the simpler version that drops the distinctness requirement (just to keep the program slightly smaller).
© 2009 IBM Corporation IBM Research 13
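Two words are anagrams exactly when their sorted letter sequences coincide, which makes the example easy to check mechanically; a quick Python illustration:

```python
def is_anagram(a, b):
    """True when a and b use exactly the same multiset of letters."""
    return sorted(a) == sorted(b)

# the anagrams of "emits" cited above
assert all(is_anagram(w, "emits") for w in ["mites", "items", "times", "smite"])
```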
• 14. CCP Research Programmes
Key to solution
 Given a list of N dictionary words W, generate N facts word(W, L, S, Ws) where
– L is the first character of W,
– Ws is an anagram of W (possibly not an English word) in which all letters are in increasing sort order, and
– S is the first letter of Ws.
– E.g. word(emits, e, e, eimst), word(items, i, e, eimst), word(mites, m, e, eimst), …
 With these clauses, crossgrams can be generated quickly by backtracking (assuming the facts are indexed appropriately, as they are in many Prolog systems).
© 2009 IBM Corporation IBM Research 14
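The indexing idea translates directly: key each word by (first letter, sorted-letter signature); a word is then a crossgram (in the simplified, non-distinct version) precisely when every one of its letters starts some dictionary word with the same signature. A Python sketch of this scheme – an analogue of the word/4 facts, not the RCC program on the next slide:

```python
from collections import defaultdict

def crossgrams(dictionary):
    """Return the words in `dictionary` that are (non-distinct) crossgrams."""
    sig = lambda w: ''.join(sorted(w))     # the Ws of word(W, L, S, Ws)
    index = defaultdict(set)               # (L, Ws) -> words, like indexed facts
    for w in dictionary:
        index[(w[0], sig(w))].add(w)
    return [w for w in dictionary
            if all(index[(letter, sig(w))] for letter in w)]
```

With ["emits", "mites", "items", "times", "smite"] every word is a crossgram; adding "stone" does not make it one, since no listed anagram of it starts with t.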
• 15. CCP Research Programmes
Crossgrams: illustrates use of if D G goals
class Crossgram {
  @Gentzen def word(Atom, Char, Char, Atom).
  goal crossgrams(Dict: List[Atom]) = R!{
    if (all (W in Dict) {
          W = name(Wls),
          Wsls = Wls.msort(),
          word(W, Wls.head, Wsls.head, name(Wsls))
        })
      words(R)
  }
  goal words(R:List[Atom]) {
    word(A,C,C,Ws),
    A = name(Wls),
    R = List(A, Wls.tail.map((L:Char) => W!{ word(W,L,_,Ws) }))
  }
}
Assert word/4 constraints on the fly, based on the dictionary, and add them to the store. Solve the goal in the context of the generated constraints. This could fail, triggering backtracking.
© 2009 IBM Corporation IBM Research 15
• 16. CCP Research Programmes
Conclusion
 (Timed) CCP provides a dual logic programming framework which is much closer to more conventional “constraint programming” than CLP.
 In particular, TCC permits the use of user-defined propagators and search procedures.
 R[T]CC considerably enriches programming power, beyond TCC.
© 2009 IBM Corporation IBM Research 16