INTERACTIVE LEARNING OF MONOTONE BOOLEAN FUNCTIONS
                                                 Last revision: October 21, 1995




                     BORIS KOVALERCHUK*, EVANGELOS TRIANTAPHYLLOU*1,
                       ANIRUDDHA A. DESHPANDE*, AND EUGENE VITYAEV**

*        Department of Industrial and Manufacturing Systems Engineering, 3128 CEBA Building, Louisiana State
         University, Baton Rouge, LA 70803-6409, U.S.A. E-mail: borisk@unix1.sncc.lsu.edu,
         ietrian@lsuvm.sncc.lsu.edu, or adeshpa@unix1.sncc.lsu.edu

**       Institute of Mathematics, Russian Academy of Science, Novosibirsk, 630090, Russia.
         E-mail: vityaev@math.nsk.su


Abstract: This paper presents some optimal interactive algorithms for learning a monotone Boolean function.
These algorithms are based on the fundamental Hansel lemma (Hansel, 1966). The advantage of these algorithms is
that they are not heuristics, as is often the case with many known algorithms for general Boolean functions, but they
are optimal in the sense of the Shannon function. This paper also formulates a new problem for the joint
restoration of two nested monotone Boolean functions f1 and f2. This formulation allows one to further decrease the
dialogue with an expert and to restore non-monotone functions of the form ¬f2 & f1. The effectiveness of the
proposed approach is demonstrated by some illustrative computational experiments related to engineering and
medical applications.

Key Words. Learning from Examples, Boolean Functions, Monotone Boolean Functions, Shannon Function.




1.       INTRODUCTION

         An interesting problem in machine learning is the one which involves the inference of a Boolean function

from collections of positive and negative examples. This is also called the logical analysis problem (Hammer and

Boros, 1994) and is a special case of inductive inference. This kind of knowledge extraction is desirable when one

is interested in deriving a set of rules which, in turn, can be easily comprehended by a field expert. In many

application domains the end users do not have sophisticated computer and modeling expertise. As a result, systems
which are based on techniques such as neural networks, statistics, or linear programming are not appealing to
them, nor do these methods provide a plausible explanation of the underlying decision-making process. On the other

hand, a logical analysis approach, when it is applicable, can result in rules which are already known to the end user

(thus increasing his/her confidence in the method) or lead to new discoveries. In other words, a logical analysis

     1 The first two authors gratefully acknowledge the support from the Office of Naval Research (ONR) grant
       N00014-95-1-0639.



approach has the distinct advantage of high comprehensibility, when compared with other methods, and can lead
to the discovery of new explicit knowledge.

        The most recent advances in distinguishing between elements of two pattern sets can be classified into six

distinct categories. These are: a clause satisfiability approach to inductive inference by Kamath et al. (1992,

1994); some modified branch-and-bound approaches of generating a small set of logical rules by Triantaphyllou et

al. (1994), and Triantaphyllou (1994); some improved polynomial time and NP-complete cases of Boolean

function decomposition by Boros et al. (1994); linear programming approaches by Wolberg and Mangasarian

(1990), and Mangasarian (1995); some knowledge based learning approaches by combining symbolic and

connectionist (neural networks) machine based learning as proposed by Shavlik (1994), Fu (1993), Goldman et al.

(1994) and Cohn et al. (1994) and finally, some nearest neighbor classification approaches by Hattoru and Torii

(1993), Kurita (1991), Kamgar-Parsi and Kanal (1985).        From the above six categories, the first three can be

considered as logical analysis approaches, since they deal with inference of Boolean functions.

        The general approach of pure machine learning and inductive inference includes the following two steps:

(i) obtaining in advance a sufficient number of examples (vectors) for different classes of observations, and (ii)

formulation of the assumptions about the required mathematical structure of the example population (see, for

instance, (Bongard, 1966), (Zagoruiko, 1979), (Dietterich and Michalski, 1983), and (Vityaev and Moskvitin,
1993)). Human interaction is involved mainly in obtaining new examples and formulating the assumptions.

        The general problem of learning a Boolean function has many applications. Such applications can be

found in the areas of medical diagnosis, hardware diagnosis, astrophysics and finance, among others, as is best
demonstrated by the plethora of databases in the Machine Learning Repository at the University of California at

Irvine (Murphy and Aha, 1994). However, traditional machine learning approaches have some difficulties. In

particular, the size of the hypothesis space is influential in determining the sample complexity of a learning
algorithm, that is, the expected number of examples needed to accurately approximate a target concept. The

presence of bias in the selection of a hypothesis from the hypothesis space can be beneficial in reducing the sample

complexity of a learning algorithm (Mitchell, 1980), (Natarajan, 1989) and (Goldman and Sloan, 1992). Usually

the amount of bias in the hypothesis space H is measured in terms of the Vapnik-Chervonenkis dimension, denoted

as VCdim(H), (Vapnik, 1982) and (Haussler, 1988). Theoretical results regarding the VCdim(H) are well known

(Vapnik, 1982). The results in (Vapnik, 1982) are still better than some other bounds given in (Blumer, 1989).




However, all these bounds are still overestimates (Haussler and Warmuth, 1993). The learning problem examined

in this paper is how one can infer a Boolean function. We assume that initially some Boolean vectors (input

examples) are available. These vectors are defined in the space {0,1}^n, where n is the number of binary attributes

or atoms. Each such vector represents either a positive or a negative example, depending on whether it must be

accepted or rejected by the target Boolean function. In an interactive environment we assume that the user starts

with an initial set (which may be empty) of positive and negative examples and then he/she asks an oracle for

membership classification of new examples which are selected according to some interactive guided learning

strategy. In (Triantaphyllou and Soyster, 1995) there is a discussion of this issue and a guided learning strategy for

general Boolean functions is presented and analyzed.

        The main challenge in inferring a target Boolean function from positive and negative examples is that the

user can never be absolutely certain about the correctness of the inferred function, unless he/she has used the

entire set of all possible examples, which is of size 2^n. Clearly, even for a small value of n, this task may be
practically impossible to realize. Fortunately, many real life applications are governed by the behavior of a

monotone system or can be described by a combination of a small number of monotone systems. Roughly

speaking, monotonicity means that the value of the output increases or decreases when the value of the input

increases. A formal definition of this concept is provided in the next section. This is common, for example, in
many medical applications where, for instance, the severity of a condition directly depends on the magnitude of the

blood pressure and body temperature within certain intervals. In machine learning monotonicity offers some

unique computational advantages. By knowing the value of certain examples, one can easily infer the values of

more examples. This, in turn, can significantly expedite the learning process. This paper is organized as follows.

The next section provides some basic definitions and results about monotone Boolean functions. The third section

presents some key research problems with highlights of some possible solution approaches. The fourth section

describes the procedure for inferring a monotone Boolean function.        The fifth and sixth sections illustrate the

previous procedures in terms of two applications.      Finally, the paper ends with some concluding remarks and

suggestions for possible extensions.




2.       SOME BASIC DEFINITIONS AND RESULTS ABOUT MONOTONE BOOLEAN FUNCTIONS

         Let En denote the set of all binary vectors of length n. Let α and β be two such vectors. Then, the vector
α = (α1,α2,α3,...,αn) precedes the vector β = (β1,β2,β3,...,βn) (denoted as: α ≤ β) if and only if the following is true:
αi ≤ βi, for all 1 ≤ i ≤ n. If, at the same time, α ≠ β, then it is said that α strictly precedes β (denoted as: α < β). The two
binary vectors α and β are said to be comparable if one of the relationships α ≤ β or β ≤ α holds.

         A Boolean function f(x) is monotone if for any vectors α,β ∈ En, the relation f(α) ≥ f(β) follows from the

fact that α ≥ β. Let Mn be the set of all monotone Boolean functions defined on n variables. A binary vector α of

length n is said to be an upper zero of a function f ∈ Mn, if f(α) = 0 and, for any vector β such that β ≤ α, we
have f(β) = 0. Also, we shall call the number of unities (i.e., the number of "1" elements) in vector α its
level and denote it by U(α). An upper zero α of a function f is said to be the maximal upper zero if we have
U(β) ≤ U(α) for any upper zero β of the function f (Kovalerchuk and Lavkov, 1984). Analogously, we can define the
concepts of lower unit and minimal lower unit. A binary vector α of length n is said to be a lower unit
of a function f ∈ Mn, if f(α) = 1 and, for any vector β from En such that β ≥ α, we get f(β) = 1. A lower unit α of
a function f is said to be the minimal lower unit if we have U(α) ≤ U(β) for any lower unit β of the function f.
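         To make these definitions concrete, the following small Python sketch (our own illustration, not part of the
original text; the names precedes, is_monotone and maximal_upper_zeros are hypothetical) checks monotony and
finds the maximal upper zeros by brute force:

from itertools import product

def precedes(a, b):
    # a precedes b (a <= b) componentwise; a, b are equal-length 0/1 tuples.
    return all(ai <= bi for ai, bi in zip(a, b))

def is_monotone(f, n):
    # f(a) <= f(b) must hold whenever a <= b; brute force over {0,1}^n.
    space = list(product((0, 1), repeat=n))
    return all(f(a) <= f(b) for a in space for b in space if precedes(a, b))

def maximal_upper_zeros(f, n):
    # Upper zeros are vectors a with f(a) = 0; keep only the maximal ones.
    zeros = [a for a in product((0, 1), repeat=n) if f(a) == 0]
    return [a for a in zeros
            if not any(a != b and precedes(a, b) for b in zeros)]

f = lambda x: x[0] | x[1]                       # the disjunction x1 OR x2
assert is_monotone(f, 2)
assert maximal_upper_zeros(f, 2) == [(0, 0)]    # its only maximal upper zero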
                                                                    

         Examples of monotone Boolean functions are: the constants 0 and 1, the identity function f(x) = x, the

disjunction (x1 ∨ x2), the conjunction (x1 ∧ x2), etc.            Any function obtained by a composition of monotone

Boolean functions is itself monotone.        In other words, the class of all monotone Boolean functions is closed.

Moreover, the class of all monotone Boolean functions is one of the five maximal (pre-complete) classes in the

set of all Boolean functions. That is, there is no closed class of Boolean functions, containing all monotone

Boolean functions and distinct from the class of monotone functions and the class of all Boolean functions. The

reduced disjunctive normal form (DNF) of any monotone Boolean function, distinct from 0 and 1, does not

contain negations of variables.       The set of functions {0, 1, (x1 ∨ x2), (x1 ∧ x2)} is a complete system (and

moreover, a basis) in the class of all monotone Boolean functions (Alekseev, 1988).

         For the number ψ(n) of monotone Boolean functions depending on n variables, it is known that:

                         ψ(n) = 2^{C(n, ⌊n/2⌋) (1 + ε(n))},

where 0 < ε(n) < c(log n)/n and c is a constant (see, for instance, (Kleitman, 1969) and (Alekseev, 1988)). Let a

monotone Boolean function f be defined with the help of a certain operator Af (also called an oracle) which when

fed with a vector α = (α1,α2,α3,...,αn), returns the value of f(α). Let F = {F} be the set of all algorithms which can



solve the above problem and ϕ(F, f) be the number of accesses to the operator Af required to solve a given problem

about inferring a monotone function f∈Mn.

         Next, we introduce the Shannon function ϕ(n) as follows (Korobkov, 1965):

                         ϕ(n) = min_{F∈F} max_{f∈Mn} ϕ(F, f).                                                (2-1)


The problem examined next is that of finding all maximal upper zeros (minimal lower units) of an arbitrary function f∈Mn

with the help of a certain number of accesses to the operator Af. It is shown in (Hansel, 1966) that in the case of

this problem the following relation is true (known as Hansel's theorem or Hansel's lemma):

                         ϕ(n) = C(n, ⌊n/2⌋) + C(n, ⌊n/2⌋ + 1).                                               (2-2)
Here n/2 is the floor (the closest integer number to n/2 which is no greater than n/2).     In terms of machine

learning the set of all maximal upper zeros represents the border elements of the negative pattern.           In an

analogous manner the set of all minimal lower units to represent the border of a positive pattern. In way a

monotone Boolean function represents two "compact patterns".
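         As a quick numeric illustration (our own sketch; hansel_bound is a hypothetical helper name), relation (2-2)
can be evaluated and compared against the exhaustive dialogue of 2^n calls:

from math import comb

def hansel_bound(n):
    # phi(n) = C(n, floor(n/2)) + C(n, floor(n/2) + 1), per relation (2-2).
    return comb(n, n // 2) + comb(n, n // 2 + 1)

for n in (3, 5, 11):
    print(n, hansel_bound(n), 2 ** n)   # 6 vs 8, 20 vs 32, 924 vs 2048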

        It has been shown (Hansel, 1966) that restoration algorithms for monotone Boolean functions which use

Hansel's lemma are optimal in terms of the Shannon function. That is, they minimize the maximum number of
oracle calls over all possible restoration algorithms. It is interesting to note at this point that, to the best of our

knowledge, Hansel's lemma has not been translated into English, although there are numerous references to it in
the non-English literature (Hansel wrote his paper in French). This lemma is one of the final results of long-term
efforts on monotone Boolean functions, beginning with Dedekind (1897, in German). One approach to use

monotonicity in combination with existing pattern recognition/classification approaches is to consider each one of

the available points and apply the concept of monotonicity to generate many more data points. However, such an

approach would have to explicitly consider a potentially humongous set of input observations. On the other hand,

these classification approaches are mostly NP-complete and very CPU time consuming. Thus, the previous

approach would be inefficient.

        In (Gorbunov and Kovalerchuk, 1982) an approach for restoring a single monotone Boolean function is

presented. That approach implicitly considers all derivable data from a single observation by utilizing Hansel

chains of data (Hansel, 1966). That algorithm is optimal in the sense of the Shannon function. However, although




this result has also been known in the non-English literature (it was originally published in Russian), to the best of

our knowledge, it has not yet been described in the English literature.

         We will call such a function an increasing (isotone) monotone Boolean function, in
contrast with a decreasing (antitone) monotone Boolean function. A Boolean function is decreasing monotone
if, for any vectors α,β ∈ En, the relation f(α) ≥ f(β) follows from the fact that α ≤ β (Rudeanu, 1974, p.149).

         Each general discriminant function (i.e., a general Boolean function) can be described in terms of several

increasing gi(x1, ...,xn) and decreasing hi(x1, ...,xn) monotone Boolean functions (Kovalerchuk et al., 1995). That is,

the following is always true:


                         q(x) = ∨_{j=1..m} ( gj(x) ∧ hj(x) ).                                                (2-3)

        Next, let us consider the case in which q(x) = g(x) ∧ h(x).       Here: q+ = g+ ∩ h+, where: q+ = {x: q(x)=1},

g+ = {x: g(x)=1}, and h+ = {x: h(x)=1}. Therefore, one can obtain the set of all positive examples for q as the

intersection of the sets of all positive examples for the monotone functions g and h.

        For a general function q(x), represented as in (2-3), the union of all these intersections gives the full set of

positive examples: q+ = ∪ q+j = ∪ (g+j ∩ h+j). Often, we do not need so many separate monotone functions. The
union of all conjunctions which do not include negated variables forms a single increasing monotone Boolean function

(see, for instance, (Yablonskii, 1986) and (Alekseev, 1988)).
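         The following Python sketch (ours; the functions g, h and q are hypothetical examples) illustrates the case
q(x) = g(x) ∧ h(x) and the set identity q+ = g+ ∩ h+ for an increasing g and a decreasing h:

from itertools import product

g = lambda x: x[1]            # increasing monotone: g(x) = x2
h = lambda x: 1 - x[0]        # decreasing monotone: h(x) = NOT x1
q = lambda x: g(x) & h(x)     # q(x) = x2 AND (NOT x1), which is not monotone

space = list(product((0, 1), repeat=2))
q_pos = {x for x in space if q(x) == 1}
g_pos = {x for x in space if g(x) == 1}
h_pos = {x for x in space if h(x) == 1}
assert q_pos == g_pos & h_pos          # q+ = g+ (intersected with) h+
print(q_pos)                           # {(0, 1)}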



3.      SOME KEY PROBLEMS AND ALGORITHMS

In this section we present some key problems and the main steps of algorithms for solving them.



PROBLEM 1: (Inference of a monotone function with no initial data)



Conditions:

There are no initial examples to initiate learning. All examples should be obtained as a result of the interaction of

the designer with an oracle (i.e., an operator Af).    It is also required that the discriminant function should be a

monotone Boolean function. This problem is equivalent to the requirement that we consider only two compact

monotone patterns. These conditions are natural in many applications. Such applications include the estimation of




reliability (see also the illustrative examples for Problem 2 later in this section).



Algorithm A1:

Step 1: The user is asked to confirm function monotony.

Step 2: Apply an iterative algorithm for generating examples and construction of the DNF representation. Do so

                 by using Hansel's lemma (to be described in section 4, see also (Hansel, 1966) and (Gorbunov and

                 Kovalerchuk, 1982)). This algorithm is optimal according to relations (2-1) and (2-2) in section

                 2.
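In an implementation, the operator Af of Step 2 can be any callable that classifies one vector. A minimal
human-in-the-loop sketch (our own illustration, with hypothetical prompt text) might look as follows:

def human_oracle(vector):
    # Operator A_f: ask the expert for the membership value f(vector).
    answer = input(f"Is f{tuple(vector)} = 1? [y/n] ")
    return 1 if answer.strip().lower().startswith("y") else 0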



PROBLEM 2: (The connected classification problem)

Conditions:

We should learn to classify an arbitrary vector as a result of the interaction of the designer with an oracle (i.e., an

operator Af). The discriminant functions should be a combination of some monotone Boolean functions. There

are some initial examples for learning the two connected classification problems simultaneously, under the
supposition of monotony for both of them and the supposition that the positive patterns are nested.
         Formally, the above consideration means that for all α ∈ En, the following relation is always true: f2(α) ≤

f1(α) (where f1 and f2 are the discriminant monotone Boolean functions for the first and the second problems,

respectively).    The last situation is more complex than the previous one. However, the use of additional

information from both problems allows for the potential to accelerate the rate of learning.



Some Examples of Nested Problems

First illustrative example: The engineering reliability problem

        For illustrative purposes consider the problem of classifying the states of some system by a reliability

related expert. This expert is assumed to have worked with this particular system for a long time and thus can

serve as an "oracle" (i.e., an operator denoted as Af). States of the system are represented by binary vectors from

En (a binary space defined on n 0-1 attributes or characteristics). The "oracle" is assumed to be able to answer

questions such as: "Is reliability of a given state guaranteed?" (Yes/No) or: "Is an accident for a given state

guaranteed?" (Yes/No). In accordance with these questions, we pose two interrelated nested classification tasks.




The first one is for answering the first question while the second task is for answering the second question. Next,

we define the four patterns which are possible in this situation.

        Task 1:

          Pattern 1.1: "Guaranteed reliable states of the system" (denoted as E+1).

          Pattern 1.2: "Reliability of the states of the system is not guaranteed" (denoted as E-1).

        Task 2:

          Pattern 2.1: "States of the system with some possibility for normal operation" (denoted as E+2).

          Pattern 2.2: "States of the system which guarantee an accident" (denoted as E-2).



        Our goal is to extract the way the system functions in the form of two discriminant Boolean functions f2

and f1. The first function is related to task 1, while the second function is related to task 2 (as defined above).

Also observe that the following relations must be true: E+2 ⊃ E+1 and f2(α) ≥ f1(α) for all α ∈ En describing the

system state, where f1(α) and f2(α) are the discriminant monotone Boolean functions for the first and second tasks,

respectively.



Second illustrative example: Breast cancer diagnosis

        For the second illustrative example consider the following nested classification problem related to breast

cancer diagnosis.    The first sub-problem is related to the clinical question of whether a biopsy or short term

follow-up is necessary or not. The second sub-problem is related to the question whether the radiologist believes

that the current case is highly suspicious for malignancy or not. It is assumed that if the radiologist believes that

the case is malignant, then he/she will also definitely recommend a biopsy. More formally, these two sub-problems

are defined as follows:

        The Clinical Management Sub-Problem:

        One and only one of the following two disjoint outcomes is possible.

                  1) "Biopsy/short term follow-up is necessary", or:

                  2) "Biopsy/short term follow-up is not necessary".

        The Diagnosis Sub-Problem:

         Similarly as above, one and only one of the following two disjoint outcomes is possible. That is, a given




case is:

                   1) "Highly suspicious for malignancy", or:

                   2) "Not highly suspicious for malignancy".



It can be easily seen that the corresponding states satisfy the nesting conditions of the first illustrative example.



Third illustrative example: Radioactivity contamination detection

        The third illustrative example is also diagnosis related. The issue now is how to perform radioactivity

contamination tests. Usually, the more time demanding a test is, the more accurate the result will be. Therefore, an

efficient strategy would be to perform the less time demanding tests first and, if the need arises, to perform the more

time demanding (and also more accurate) tests later (this is analogous to the previous breast cancer problem in

which case a biopsy can yield more accurate results but it is also more expensive). The corresponding two nested

problems can be defined as follows:

        Sub-problem 1: Diagnosis of radioactivity contamination, low risk case:

                   Pattern 1: "Surely contaminated"

                   Pattern 2: "Not necessarily contaminated"

        Sub-problem 2: Very high risk for contamination:

                   Pattern 1: "Extra detection of contamination is necessary"

                   Pattern 2: "Extra detection of contamination is not necessary"

Again, it can be easily seen that the corresponding states satisfy the nesting conditions of the first illustrative

example. Next, the outline of an algorithm for solving this kind of problem is presented.



Algorithm A2:

Step 1: The user is asked to confirm the monotony for both tasks (i.e., the monotonicity of the functions

                   underlying the patterns related to the previous two tasks).

Step 2: Testing of monotony of the initial examples for both tasks.

Step 3: Reject the examples which violate monotony.

Step 4: Restoration of the f(α) values for the elements of Hansel's chains, using the monotony property and known




examples (Hansel, 1966; Gorbunov and Kovalerchuk, 1982), see also section 4.

Step 5: Apply a dual iterative algorithm for the additional examples which were generated based on Hansel's

                 lemma.



Next, we discuss step 5 in more detail:

Step 5.1:        Generate the next vector αi1 for the interaction with Af1.

Step 5.2:        Generate the next vector αi2 for the interaction with Af2 in the same way as for Af1.

Step 5.3:        Estimation of the numbers of vectors (N1,1j) for which we can compute f1(α) and f2(α) without

                 asking oracles Af1 and Af2 by temporarily assuming that fj(αi1) = 1 (for j = 1,2), if f2(αi1) = e, where
                 e stands for the empty (i.e., not yet known) value.

Step 5.4:        Estimation of the numbers of vectors (N1,0j) for which we can compute f1(α) and f2(α) without

                 asking oracles Af1 and Af2 by temporarily assuming that fj(αi1) = 0 (for j = 1,2), if f2(αi1) = e.

Step 5.5:        Estimation of the numbers of vectors (N2,1j) for which we can compute f1(α) and f2(α) without

                 asking oracles Af1 and Af2 by temporarily assuming that fj(αi2) = 1 (for j = 1,2), if f1(αi2) = e.

Step 5.6:        Estimation of the numbers of vectors (N2,0j) for which we can compute f1(α) and f2(α) without

                 asking oracles Af1 and Af2 by temporarily assuming that fj(αi2) = 0 (for j = 1,2), if f1(αi2) = e.

Step 5.7:        Choose the variant with the maximal number of vectors from steps 5.3-5.6 in order to choose

                 which example, αi1 or αi2, should be sent for the correct classification by the appropriate oracle.

Comment:

In step 5.7 we realize a local algorithm. This means that we consider just suppositions about fj(αi1) and fj(αi2). We

do not simultaneously consider further suppositions about fj(αi+1,1), fj(αi+1,2), fj(αi+2,1), fj(αi+2,2), etc. for the next

possible results of the interaction. Considering such further suppositions is a possible extension of algorithm A2
which may decrease the number of interactions, but it also leads to more computations.
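A minimal sketch of the greedy choice in steps 5.3-5.7 (our own illustration; it reuses precedes from the sketch in
section 2 and, for brevity, omits the cross-function propagation through the nesting of f1 and f2):

def deducible_count(alpha, assumed_value, unknown):
    # How many still-unknown vectors get their value for free by monotony
    # if the oracle's answer for alpha were assumed_value.
    if assumed_value == 1:
        return sum(1 for x in unknown if precedes(alpha, x))  # all above alpha
    return sum(1 for x in unknown if precedes(x, alpha))      # all below alpha

def choose_query(a1, a2, unknown1, unknown2):
    # Steps 5.3-5.6 estimate the guaranteed extension for both candidate
    # queries under both possible answers; step 5.7 picks the maximum.
    variants = [(deducible_count(a1, v, unknown1), a1) for v in (0, 1)]
    variants += [(deducible_count(a2, v, unknown2), a2) for v in (0, 1)]
    return max(variants)[1]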



4.      AN ALGORITHM FOR RESTORING A MONOTONE BOOLEAN FUNCTION

4.1.    General scheme of the algorithm.

Next we present algorithm RESTORE for the interactive restoration of a monotone Boolean function and two

procedures GENERATE and EXPAND in a pseudo programming language.




ALGORITHM "RESTORE"

Input.            Dimension n of the binary space and access to an oracle Af (which has to be inferred).

Output. A monotone Boolean function restored after a minimal number of calls to the oracle Af (minimal
                  according to the Shannon criterion, whose value is given by relation (2-2) in section 2).

Method.

1)        Construction of Hansel chains (see section 4.2)

2)        Restoration of a monotone Boolean function starting from chains of minimal length and finishing with

          chains of maximal length. This ensures that the number of membership calls to the oracle Af is no more
          than the limit presented in formula (2-2), i.e., that the algorithm is optimal in the sense of the Shannon
          function (2-1).

BEGIN ;
 Set i=1; {initialization}
 DO WHILE (function f(x) is not entirely restored)
   Step 1.       Use procedure GENERATE to generate element ai;
   Step 2.       Call oracle Af to retrieve the value of f(ai);
   Step 3.       Use procedure EXPAND to deduce the values of other elements in Hansel chains (i.e., sequences
                 of examples in En) by using the value of f(ai), the structure of element ai and the monotony
                 property;
   Step 4.       Set i ← i+1;
  RETURN
END;


PROCEDURE "GENERATE": Used to generate an element ai to be classified by the oracle Af.

Input.         Dimension n of the binary space.
Output. The next element to send for classification by the oracle Af.

Method:           Proceeding from minimal to maximal Hansel chains

IF i=1 THEN {where i is the index of the current element}
  BEGIN
               Step 1.1.         Retrieve all Hansel chains of minimal length;
               Step 1.2.         Randomly choose the first chain c1 among the chains retrieved in step 1.1;
               Step 1.3.         Set the first element a1 as the minimal element of chain c1;
  END
ELSE
  BEGIN
   Set k=1 {where k is the index number of a Hansel chain};
   DO WHILE (NOT all Hansel chains are tested)
               Step 2.1.         Find the last element ai of chain Ck, which still has no confirmed f(ai) value;
               Step 2.2.         If step 2.1 did not return an element ai, then randomly select the next Hansel
                                 chain Ck+1 of the same length as the one of the current chain Ck;
                 Step 2.3.          Find the least element ai from chain Ck+1, which still has no confirmed f(ai) value;
                 Step 2.4.          If Step 2.3 did not return an element ai, then randomly choose chain Ck+1 of the
                                    next available length. Go to this chain Ck+1;
               Step 2.5.          Set k ← k+1;
  RETURN
 END



PROCEDURE "EXPAND":                 Used to expand the f(ai) value for other elements using monotony and Hansel
                                    chains.

Input.           The f(ai) value.

Output. An extended set of elements with known f(x) values.

Method:          The method is based on the following properties: if x ≥ ai and f(ai)=1, then f(x)=1; and if x ≤ ai
                 and f(ai)=0, then f(x)=0.

BEGIN

  Step 1.                 Generate the set {x} of elements comparable with ai, i.e., all x with x ≥ ai or x ≤ ai;
  Step 2.                 If f(ai)=1, then ∀ x (x ≥ ai) set f(x)=1;
                          If f(ai)=0, then ∀ x (x ≤ ai) set f(x)=0;
  Step 3.                 Store the f(x) values which were obtained in step 2;
END
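For concreteness, the three routines above can be condensed into a single Python sketch (ours; it simplifies
GENERATE by walking chains in order of increasing length, and it reuses precedes from section 2 and the
hansel_chains constructor sketched in section 4.2):

from itertools import product

def restore(n, oracle, chains):
    values = {}                                   # known f(x) values so far
    space = list(product((0, 1), repeat=n))
    for chain in sorted(chains, key=len):         # minimal-length chains first
        for a in chain:                           # minimal element first
            if a in values:
                continue
            values[a] = oracle(a)                 # one membership call (Step 2)
            # EXPAND (Step 3): push the answer to all comparable vectors.
            for x in space:
                if x not in values:
                    if values[a] == 1 and precedes(a, x):
                        values[x] = 1
                    elif values[a] == 0 and precedes(x, a):
                        values[x] = 0
    return values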


4.2.     Construction of Hansel Chains

Several steps in the previous algorithms deal with Hansel chains. Next we describe how to construct all Hansel

chains for a given state space En of dimension n. First, we give a formal definition of a general chain.

         A chain is a sequence of binary vectors (examples) a1 ,a2 ,...,aq , such that ai+1 is obtained from ai by

changing a "0" element to "1". That is, there is an index j such that ai,j = 0, ai+1,j = 1 and for any t≠j the following

is true ai,t=ai+1,t. For instance, the following list <01000, 01100, 01110> of three vectors is a chain.

         To construct all Hansel chains we will use an iterative procedure as follows. Let En = {0,1}^n be the

n-dimensional binary cube.       All chains for En are constructed from chains for En-1. Therefore, we begin the

construction for En by starting with E1 and iteratively proceed to En.



         (i) Chains for E1.

For E1 there is only a single (trivial) chain and it is <0, 1>. Note that "0" is the first element (vector of size 1)




while "1" is the second element.



         (ii) Chains for E2.

         We take E1 and add at the beginning of each one of its chains the element "0". Thus, we obtain the set

{00, 01}. This set is called E2min. Similarly, by adding "1" to E1 = {0, 1} we construct the set E2max = {10, 11}.

Both E2min and E2max are isomorphic to E1, because they have the isomorphic chains <00,01> and <10,11>. The

union of E2min and E2max is E2.    That is, E2 = E2min ∪ E2max. Observe that the chains <00,01> and <10,11> cover

entirely E2 but they are not Hansel chains. To obtain Hansel chains we need to modify them as follows.

         Adjust the chain <00,01> by adding the maximum element (11) from the chain <10,11>. That is, obtain a

new chain <00,01,11>. Also reject element (11) from the chain <10,11>. Hence, we obtain the two new chains:

<00,01,11> and <10>.       These chains are the Hansel chains (Hansel, 1966) for E2. That is E2 = {<00,01,11>,

<10>}.



         (iii) Chains for E3.

         The Hansel chains for E3 are constructed in a manner similar to the chains for E2. First, we double and

adjust the chains of E2 to obtain E3min and E3max. The following relation is also true:

                          E3 = E3min ∪ E3max,

where E3min = {<000,001,011>, <010>} and E3max = {<100,101,111>,<110>}. We do the same chain modification

as for E2. That is, first we choose two isomorphic chains. At first, let these be the two maximal length chains

<000,001,011> and <100,101,111>. We add the maximal element (111) from <100,101,111> to <000,001,011>

and drop it from <100,101,111>. In this way we obtain the two new chains: <000,001,011, 111> and <100,101>.

Next, we repeat this procedure for the rest of the isomorphic chains <010> and <110>. In this simple case we will

have just one new chain <010, 110> (note that the second chain will be empty). Therefore, E3 consists of the three

Hansel chains <010,110>, <100,101> and <000,001,011,111>. That is, the following is true:

                          E3 = {<010,110>, <100,101>, <000,001,011,111>}.

         In a similar manner one can construct the Hansel chains for E4, E5 and so on. Below we present a

general scheme for En under the supposition that we have all Hansel chains for En-1. Note that the Hansel chains of

En-1 can be obtained recursively from the Hansel chains of E1 , E2 , ..., En-2.




Chains for En.

          Suppose that En-1 = {C1 ,...,Ci ,..., Ck}, where each Ci is a Hansel's chain for En-1. First, we double En-1 and

by using the "0" and "1" elements as before we construct the two isomorphic chains:

          Enmin = {0C1, 0C2 ,..., 0Ck}

and                 Enmax = {1C1, 1C2, ..., 1Ck}.

The sets Enmin and Enmax entirely cover En. That is, the following is true:

          En = Enmin ∪ Enmax.

Next, let us consider chain Ci = {ci1, ...,cij ..., cim(i)} and the pair of isomorphic chains 0Ci and 1Ci, in which 0Ci =

{0ci1, ..., 0cij, ..., 0cim(i)} and 1Ci = {1ci1, ...,1cij ..., 1cim(i)}. Note that in the previous expressions 0cij and 1cij are

n-dimensional binary vectors and cij is an (n-1)-dimensional vector. We construct a new pair of Hansel chains

0CiH = {0ci1, ..., 0cij, ...,0cim(i), 1cim(i)} and 1CiH = {1ci1, ..., 1cij, ..., 1cim(i)-1} from 0Ci = {0ci1, ...,0cij, ..., 0cim(i)} and 1Ci

= {1ci1, ..., 1cij, ..., 1cim(i)} by using the same "cut" and "add" procedure as was the case for E3. We repeat this

operation for all pairs of isomorphic chains in Enmin and Enmax to obtain the entire set of Hansel chains for En.
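         The doubling and "cut and add" procedure above translates directly into a short recursive Python sketch
(our illustration; hansel_chains is a hypothetical name):

def hansel_chains(n):
    # Build the Hansel chains of E^n from those of E^(n-1): the chain 0C
    # gains the top element of 1C, and 1C loses it ("cut and add").
    if n == 1:
        return [[(0,), (1,)]]
    chains = []
    for c in hansel_chains(n - 1):
        chains.append([(0,) + v for v in c] + [(1,) + c[-1]])  # 0C + top of 1C
        if len(c) > 1:
            chains.append([(1,) + v for v in c[:-1]])          # 1C minus its top
    return chains

print(hansel_chains(3))
# [[(0,0,0), (0,0,1), (0,1,1), (1,1,1)], [(1,0,0), (1,0,1)], [(0,1,0), (1,1,0)]]

For n = 3 this reproduces the three chains <000,001,011,111>, <100,101> and <010,110> derived above, and the
number of chains equals C(n, ⌊n/2⌋), in agreement with Hansel's lemma stated below.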

         Next, in figures 2 and 3, we illustrate this procedure of chain construction for E2 and E3. Figure 2 shows
the transformation of E1 into E2max and E2min and, after that, the construction of the new chains from them. The nodes

of a chain are connected with solid lines. At first we have an edge between (10) and (11); then we cut it and obtain
a new chain <00, 01, 11> by attaching (11) to the chain <00, 01>. The node (10) forms a trivial chain with a
single element. Figure 3 begins from two copies of the output chains shown in figure 2, with "0" added to the
first of them and "1" to the second one. The output in figure 3 presents the chains in E3, which are also
connected with solid lines. They are obtained with the same "cut and add" procedure.

The justification of the above construction is Hansel's lemma (Hansel, 1966):

The binary cube En is covered by C(n, ⌊n/2⌋) disjoint chains with the following properties:

a) the number of chains of length n-2p+1 is C(n, p) - C(n, p-1), where 0 ≤ p ≤ ⌊n/2⌋ [the minimal element of
each such chain is a vector with p units ("1") and n-p zeros ("0")];

b) for each three sequential elements a1 < a2 < a3 from a chain, the complement to a2 in the interval [a1, a3]
belongs to a chain of length n-2p-1.

The vector b is the complement to a2 if it forms a square with a1, a2, a3 (see fig. 1). Example of such a square:
a1=(01000), a2=(01100), a3=(01110) and b=(01010).




Fig. 1. Binary square.

Fig. 2. Construction of Hansel chains for E2.

Fig. 3. Construction of Hansel chains for E3.




5.      COMPUTATIONAL INTERACTIVE HIERARCHICAL EXPERIMENT I.

5.1     Problem Description

        In this section we demonstrate the previous issues in terms of an illustrative example from the area of

breast cancer diagnosis. It is assumed that some preliminary medical tests have been performed (such as a

mammogram) and the medical doctor wishes to determine what to do next. That is, should he/she proceed with a

biopsy or not? The problem considered in this section is a nested one. That is, it is comprised of two

interrelated sub-problems. The first problem is related to the question whether a biopsy is necessary. The second

problem is related to the question whether the human expert (medical doctor) believes that the current case

represents a malignant case or not. It is assumed that if the doctor believes that the case is malignant, then he/she

will also definitely recommend a biopsy. Recall that in section 3 we defined these two sub-problems as follows:

        The Clinical Management Sub-Problem:

        One and only one of the following two disjoint outcomes is possible.

                   1) "Biopsy/short term follow-up is necessary", or:

                   2) "Biopsy/short term follow-up is not necessary".

        The Diagnosis Sub-Problem:

         Similarly as above, one and only one of the following two disjoint outcomes is possible. That is, a given

        case is:

                   1) "Highly suspicious for malignancy", or:

                   2) "Not highly suspicious for malignancy".



        The previous two interrelated sub-problems can be formulated as the restoration problem of two nested

monotone Boolean functions. The concept of monotonicity was explained to a medical expert, and he felt
comfortable with the idea of using monotone Boolean functions. Moreover, the dialogue which followed

confirmed the validity of this assumption. The underlying Boolean functions were defined on 11 binary atoms.

This simplification was done to illustrate the applicability of the proposed methodology.      More details on this

problem can be found in (Kovalerchuk, Triantaphyllou and Ruiz, 1995).




From the above discussion it follows that now we will work in the 11-dimensional binary space (i.e., in

E11). The two corresponding functions are f1 and f2. Function f1 returns true (1) if the decision is
"biopsy/short term follow-up is necessary", and false (0) otherwise. Similarly, function f2 returns true (1) if the
decision is "highly suspicious for malignancy", and false (0) otherwise.

         Full restoration of the functions fi (for i = 1, 2) without any optimization of the dialogue process would
have required up to 2^11 = 2,048 calls (membership inquiries) to an expert (i.e., the Af operator or "oracle") to
restore any Boolean function which is defined on 11 attributes. However, according to the Hansel lemma (i.e.,
relation (2-2) in section 2) and under the assumption of monotony, an optimal (i.e., with a minimal number of
accesses) dialogue for restoring a monotone Boolean function would require at most C(11,5) + C(11,6) =
462 + 462 = 924 calls to the oracle. Observe that this new value is 2.2 times smaller than the previous upper limit
of 2,048 calls. However, even this upper limit of 924 calls can be reduced further, as explained in the next section.

5.2.    Hierarchical Decomposition

        In this particular breast cancer diagnosis problem the medical expert indicated that the original 11 binary

attributes can be organized in terms of a hierarchy. This hierarchy follows from the definition of these 11 medical

oriented binary attributes. Next, it will be shown that this hierarchy can be exploited in reducing the required

number of calls to the "oracle" even further. We will not describe the medical justification of this hierarchy here

(more details can be found in (Kovalerchuk, Triantaphyllou, and Ruiz, 1995)). The hierarchy used in this example

is given as follows:

        Level 1 (5 attributes)               Level 2 (all 11 attributes)
                x1                  →                w1, w2, w3
                x2                  →                y1, y2, y3, y4, y5
                x3                  →                x3
                x4                  →                x4
                x5                  →                x5

In this hierarchy the two attributes x1 and x2 correspond to two monotone Boolean functions which also have to be

restored. That is, the following is true:

                 x1=ϕ(w1, w2, w3),

        and      x2=ψ(y1, y2, y3, y4, y5).




The expert indicated that these two functions ϕ and ψ should be common to both problems (i.e., the biopsy and

cancer problems). Therefore, the following relation is true regarding the fi (for i = 1, 2) and the two ϕ and ψ

functions:

                 fi(x1,x2,x3,x4,x5) = fi(ϕ(w1,w2,w3), ψ(y1,y2,y3,y4,y5), x3, x4, x5).

         Given the above analysis and decomposition, it follows according to Hansel's lemma (relation (2-2) in
section 2) that the restoration problem at level 1 would require at most C(5,2) + C(5,3) = 10 + 10 = 20 calls to the
"oracle". Observe that without the exploitation of the monotone property, an exhaustive dialogue requires
2^5 = 32 calls, which is 1.6 times higher. That is, the exploitation of a hierarchy can even benefit the restoration of

general (i.e., not necessarily monotone) Boolean functions.
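         As a quick check of the arithmetic in sections 5.1 and 5.2 (our own sketch; hansel_bound is the hypothetical
helper from section 2, restated here to keep the snippet self-contained):

from math import comb

def hansel_bound(n):
    return comb(n, n // 2) + comb(n, n // 2 + 1)

print(hansel_bound(11), 2 ** 11)   # 924 vs 2,048 for the flat 11-attribute space
print(hansel_bound(5), 2 ** 5)     # 20 vs 32 at level 1 of the hierarchy
print(hansel_bound(3), 2 ** 3)     # 6 vs 8 for x1 = phi(w1, w2, w3)
# Worst case with the hierarchy: 20 (f_i) + 6 (phi) + 20 (psi) = 46 calls,
# far below the flat upper limit of 924 calls.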

         The above bounds represent upper limits. When we actually interviewed the medical expert, the target
function f1 was fully restored with only 13 membership inquiries (calls). This dialogue is presented in table 1,
column 3. Table 1 is explained in more detail later in sections 5.3 to 5.5. Therefore, the number of actual calls
was 1.5 times less than the upper limit given by Hansel's lemma and 2.5 times less than exhaustive search
(which would have required up to 32 calls). Also, the same 13 calls were adequate to restore each of the
functions f2(x1,x2,x3,x4,x5) and x2 = ψ(y1,y2,y3,y4,y5) (see also table 1, columns 4 and 5).




Table 1. Dialogue Sequences for Experiment I.
 #       Vector     f1   f2   ψ    Extension if "1"     Extension if "0"    f1~  f2~  f1^  f2^
 (1)     (2)        (3)  (4)  (5)  (6)                  (7)                 (8)  (9)  (10) (11)
Chain 1.
1.1      (01100)    1*    1*   1*   1.2;6.3;7.3         6.4;7.4     1* 1* 1        1*
1.2      (11100)    1     1    1    6.4;7.4             5.1;3.1     1 1 1          1
Chain 2.
2.1      (01010)    1*    0*   1*   2.2;6.3;8.3         6.1;8.1     1* 0* 1*       0*
2.2      (11010)    1     1*   1    6.4;8.4             3.1         1 1* 1         1*
Chain 3.
3.1      (11000)    1*    1*   1*   3.2                 8.1;9.1     1    1* 1      1*
3.2      (11001)    1     1    1    7.4;8.4             8.2;9.2     1    1 1       1
Chain 4.
4.1      (10010)    1*    0*   1*   4.2;9.3             6.1;9.1     1* 0* 1*       0*
4.2      (10110)    1     1*   1    6.4;9.4             6.2;5.1     1 1* 1         1*
Chain 5.
5.1      (10100)    1*    1*   1*   5.2                 7.1;9.1     1    1* 1      1*
5.2      (10101)    1     1    1    7.4;9.4             7.2;9.2     1    1 1       1
Chain 6.
6.1      (00010)    0*    0    0*   6.2;10.3            10.1        0*   0    0*   0*
6.2      (00110)    1*    1*   0*   6.3;10.4            7.1         1    1*   1*   1*
6.3      (01110)    1     1    1*   6.4;10.5                        1    1    1    1
6.4      (11110)    1     1    1    10.6                            1    1    1    1
Chain 7.
7.1      (00100)    1*    1*   0*   7.2;10.4            10.1        1    1*   1    1*
7.2      (00101)    1     1    0*   7.3;10.4            10.2        1    1    1    1
7.3      (01101)    1     1    1*   7.4;10.5            8.2;10.2    1    1    1    1
7.4      (11101)    1     1    1    5.6                             1    1    1    1
Chain 8.
8.1      (01000)    0*    0    1*   8.2                 10.1        0*   0    0*   0
8.2      (01001)    1*    1*   1    8.3                 10.2        1    1*   1    1*
8.3      (01011)    1     1    1    8.4                 10.5;10.3   1    1    1    1
8.4      (11011)    1     1    1    10.6                9.3         1    1    1    1
Chain 9.
9.1      (10000)    0*    0    1*   9.2                 10.1        0*   0    0*   0*
9.2      (10001)    1*    1*   1    9.3                 10.2        1    1*   1*   1*
9.3      (10011)    1     1    1    9.4                 10.3        1    1    1    1
9.4      (10111)    1     1    1    10.6                10.4        1    1    1    1
Chain 10.
10.1     (00000)    0     0    0    10.2                            0    0    0    0
10.2     (00001)    1*    0*   0    10.3                            1*   0*   1*   0
10.3     (00011)    1     1*   0    10.4                            1    1*   1    1*
10.4     (00111)    1     1    1    10.5                            1    1    1    1
10.5     (01111)    1     1    1    10.6                            1    1    1    1
10.6     (11111)    1     1    1                                    1    1    1    1
Total Calls         13    13   13                                   7    13   22




Next x1 was restored as a function ϕ(w1,w2,w3). In this case the exhaustive search would require 2^3 = 8
calls. On the other hand, Hansel's lemma provides an upper limit of C(3,1) + C(3,2) = 3 + 3 = 6 calls. This value is
1.33 times less than what the exhaustive search would have required. Although the previous numbers of calls are not

excessively high, they still provide a sense of the potential of significant savings which can be achieved if

monotony is established and utilized. The numbers in this example are meant only for illustrative
purposes, to clarify the proposed methodology.

            Next, we present the optimal dialogue in restoring the ϕ(w1,w2,w3) function.

                          Chain 1          Chain 2          Chain 3
            Vectors:      <010, 110>       <100, 101>       <000, 001, 011, 111>
            Value of ϕ:   1*   1           0*   1*          0    0*   1    1

In the above illustration "1*" and "0*" indicate answers given directly by the expert.            For instance, during the

dialogue the expert decided that: ϕ(010) = 1, ϕ(101) = 1, and ϕ(100) = ϕ(001) = 0. The values without an asterisk

(i.e., when we only have "1" or "0") were obtained by utilizing the monotony property. Recall that these three

Hansel chains for E3 were constructed in section 4.

            In this experiment the four calls about the function values for the examples (010), (100), (101) and (001)

were sufficient to lead to full restoration. Monotony allowed us to extend these four values to other
examples. For instance, from the expert's testimony that ϕ(010) = 1, it is derived that ϕ(110) = 1, and from

ϕ(001)=0, it is derived that ϕ(000) = 0. Observe that these 4 calls to the expert are 2 times less than the maximum

number of calls (i.e., 8) and 1.5 times less than the guaranteed number of calls provided by Hansel's lemma (which

is equal to 6).

            The above approach leads to the full definition of the functions ϕ and ψ for level 2 as follows:

            x1     = ϕ(w1,w2,w3) = w2 ∨ w1w3 ∨ w2w3 = w2 ∨ w1w3,                                                     (5-1)

and

            x2     = ψ(y1,y2,y3,y4,y5) = y1y2∨ y2y3∨ y2y4 ∨ y1y3∨ y1y4 ∨ y2y3y4 ∨ y2y3y5 ∨ y2∨y1∨ y3y4y5 =

                   = y2∨ y1∨ y3y4y5.                                                                                 (5-2)

The above Boolean expressions were determined from the information depicted in table 1. This is explained in

detail in section 5.3.



Similarly, table 1 suggests that for level 1 we have the target functions of x1, x2, x3, x4, x5

for the biopsy sub-problem to be defined as follows:

                  f1(x) = x2x3∨ x2x4∨ x1x2∨ x1x4∨ x1x3∨ x3x4∨ x3∨ x2x5∨x1x5∨x5 =

                        = x2x4∨ x1x2∨ x1x4∨x3∨ x5.                                                                     (5-3)

and for the cancer sub-problem to be defined as:

                  f2(x) = x2x3∨x1x2x4∨x1x2∨x1x3x4∨x1x3∨x3x4∨x3∨x2x5∨x1x5∨ x4x5 =

                        = x1x2∨x3∨x2x5∨x1x5∨x4x5 = x1x2∨x3∨ (x2∨ x1∨ x4)x5.                                            (5-4)

          Next we compare the two functions f1 and f2. Observe that f1(x) = A ∨ (x2∨x1)x4∨ x5, and f2(x) = A ∨

(x2∨x1∨ x4)x5, where A = x1x2∨ x3. Hence, these two functions differ only in the parts (x2∨x1)x4∨ x5 and (x2∨ x1∨

x4)x5.
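         These expressions are easy to sanity-check by brute force. The following Python sketch (ours) encodes
(5-3) and (5-4) and verifies that f2(x) ≤ f1(x) over all of E5, i.e., that the two restored functions are indeed nested:

from itertools import product

# x = (x1, ..., x5) as a 0/1 tuple; indices are shifted by one in Python.
f1 = lambda x: (x[1] & x[3]) | (x[0] & x[1]) | (x[0] & x[3]) | x[2] | x[4]  # (5-3)
f2 = lambda x: (x[0] & x[1]) | x[2] | ((x[1] | x[0] | x[3]) & x[4])         # (5-4)

for x in product((0, 1), repeat=5):
    assert f2(x) <= f1(x)   # the nesting of the two target functions holds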

          The simplification of the disjunctive normal form (DNF) expressions in (5-1) to (5-4) allowed us to
exclude some conjunctions which are not minimal lower units. For instance, in (5-2) the term y1y4 is not a
minimal lower unit, because y1 covers it. Thus, the right hand side parts in expressions (5-1) to (5-4) form the

minimal DNFs.

          Next we present a superposition of expressions (5-1) to (5-4) for f1 and f2 as the proposed final formulas

for performing biopsy:

                  f1(x) = x2x4∨ x1x2∨ x1x4 ∨ x3∨ x5 =

                        = (y2∨ y1∨ y3y4y5)x4 ∨(w2∨ w1w3)(y2∨y1∨y3y4y5) ∨(w2∨w1w3)x4∨x3∨x5,                             (5-5)

and for cancer:

                  f2(x) = x1x2∨ x3∨ x2x5∨ x1x5∨ x4x5 =

                        = (w2∨w1w3)(y2∨y1∨y3y4y5) ∨x3∨(y2∨y1∨ y3y4y5)x5∨ (w2∨ w1w3)x5∨ x4x5.                           (5-6)

          In total we needed 13 + 13 + 4 = 30 calls to restore f1 as the superposition
                  fi(x1,x2,x3,x4,x5) = fi(ϕ(w1,w2,w3), ψ(y1,y2,y3,y4,y5), x3, x4, x5).

The same 30 calls were also needed to restore f2 independently. In reality, however, we used 30 + 13 = 43 calls

instead of 60 calls to restore both functions due to the fact that the component functions ϕ and ψ are the same for

the target functions of both problems. For the record, the entire dialogue with the medical expert took no more
than one hour.




At this point it is interesting to observe that if one wishes to restore the non-monotone function which

corresponds to the concept: "biopsy and not cancer", then this can be achieved easily as follows. First note that

the above concept represents cases in which surgery can potentially be avoided. This concept can be presented

with the composite formula: f1 & ¬f2, and thus it can be computed by using expressions (5-5) and (5-6). Therefore,

a total of 43 calls is sufficient to restore this function (which is both non-monotone and very complex

mathematically).



5.3.      Independent Search of Functions

          Table 1 represents the dialogue which was executed based on the algorithm for problem 1 (section 3).

The information in table 1 resulted in the formation of formulas (5-2) to (5-6). The first column of table 1

designates binary vectors (examples): the first number indicates a Hansel chain and the second number indicates

the location of the vector within that chain. The second column presents the binary vectors. The third, fourth and fifth
columns present the values of the functions f1, f2 and ψ, respectively. Asterisks "*" show values obtained directly

from the expert.     Values without asterisks were inferred by using the previous values and the property of

monotony.

          The sixth column shows indexes of vectors for which we can use monotony if a given function

evaluates to 1 (i.e., true value). For instance, for vector #7.1 we have f1(00100) = 1 (as given by the expert), hence

we can also set (by utilizing the monotony property) f1(00101) = 1, because vector (00101) is vector #7.2. We

can also do the same with vector #10.4.

          Similarly, the seventh column represents the indexes of vectors for which we can use monotony if a

given function evaluates to 0 (i.e., false value). For instance, for the vector #6.1 we have: f1(00010) = 0 (as given

by the expert) and thus we can infer (by utilizing monotony) that f1(00000)=0, where (00000) is vector #10.1.

          The values in column 3 are derived by "up-down sliding" in table 1 according to the following steps:



Step 1.   Begin from the first vector #1.1 (i.e., (01100)).

          Ask the expert about the value of f1(01100).

                Action: In this case the expert reported that: f1(01100)=1

Step 2.   Write "1*" next to vector #1.1 (and under column 3). Recall that an asterisk denotes an answer directly




provided by the expert. Apparently, if the reply had been false (0), then we would have written "0*" in column 3.
          The case of a false value corresponds to column 7 in table 1.

Step 3.   Go to column 6 to find vectors to extend the true (1) value.

               Action: In this case vectors #1.2, #6.3, and #7.3 should also yield (because of monotony) true

               value (1). Therefore, in column 3 and next to examples #1.2, #6.3, and #7.3 the values of the

               function must be (1) (note that now no asterisk is used).

Step 4.   Go to vector #1.2 (next vector while sliding down). Check whether the value of f1(11100) has already

          been fixed or not.

               If the value of f1(11100) is not fixed (i.e., it is empty), then repeat steps 1-3, above, for this new

               vector (i.e., vector #1.2).

               If f1(11100) is not empty (i.e., it has been already fixed), then go to the next vector while sliding

               down (i.e., move to vector #2.1).

               (Note that if the value has not been fixed yet, then we will denote it by fi(x) = e; for empty.)

               Action: f1(11100) ≠ e. Therefore, extend the values of the function for the vectors #6.4 and #7.4,
                     and go to the next vector in the table (i.e., move to vector #2.1).

Step 5.   The next vector is #2.1. Continue as in step 4, but now with vector #2.1 (instead of vector #1.2).



The above procedure is repeated until all vectors and functions have been covered. The interpretation of what

happens in columns 8 to 11 is provided in the next section.
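The sliding procedure itself then reduces to a simple loop over the Hansel chains. The sketch below is ours (the chain construction and the oracle, here called ask_expert, are assumed to be given) and reuses extend_monotone from the previous sketch:

    def restore_by_sliding(chains, ask_expert):
        """chains: list of Hansel chains, each a list of 0/1 tuples in
        increasing order. ask_expert plays the role of the operator Af."""
        values = {v: None for chain in chains for v in chain}
        for chain in chains:                  # slide from chain to chain
            for vector in chain:              # slide down the current chain
                if values[vector] is None:        # Step 4: skip fixed vectors
                    answer = ask_expert(vector)   # Steps 1-2: one expert call
                    extend_monotone(values, vector, answer)  # Step 3: extend
        return values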

          In order to construct formula (5-3) (which was shown in section 5.2) one needs to concentrate on the information depicted in columns 2 and 3 of table 1. One takes the first vector marked with "1*" in each of the chains and constructs, for each of these vectors, a conjunction of its non-zero components. For instance, for the vector (01010) in chain 2 the corresponding conjunction is x2x4. Similarly, from chain 6 we take the "1" components of the vector (00110) and form the conjunction x3x4. Formulas (5-1), (5-2) and (5-4) were obtained in a similar manner.
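This reading of the table can also be expressed compactly. The sketch below is ours (the variable naming x1..xn is generated for display only) and collects one conjunction per chain from the first 1-valued vector:

    def dnf_from_chains(chains, values):
        """Form a DNF term from the first vector of each chain whose value
        is 1; its non-zero components give the conjunction."""
        terms = []
        for chain in chains:
            for vector in chain:
                if values[vector] == 1:
                    terms.append("".join(f"x{i+1}"
                                         for i, bit in enumerate(vector) if bit))
                    break
        return " v ".join(terms)

    # e.g. (0,1,0,1,0) in chain 2 contributes the conjunction x2x4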



5.4.      Sequential Search for Nested Functions




Next we show how it is possible to further decrease the number of calls by using a modification of the algorithm described for solving problem 2. Recall that the functions f1 and f2 are nested because E2+ ⊆ E1+. That is, for any input example x the following is true: f2(x) ≤ f1(x). The last statement means that if we recognize cancer, then we recognize the necessity of biopsy too. The property f2(x) ≤ f1(x) means that

           if f2(x)=1, then f1(x)=1                                                                              (5-7)

and        if f1(x)=0, then f2(x)=0.                                                                             (5-8)

The above observation makes it possible to avoid calls to the expert for determining the right hand sides of (5-7) and (5-8) whenever one knows the values of the left hand sides.
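In code, rules (5-7) and (5-8) amount to a single pass that seeds one value dictionary from the other before any new expert call is made (again a sketch of ours, using the value dictionaries from the earlier sketches):

    def propagate_nested(values_f1, values_f2):
        """Apply (5-7) and (5-8) to fill in values without expert calls."""
        for v in values_f1:
            if values_f2.get(v) == 1 and values_f1[v] is None:
                values_f1[v] = 1     # (5-7): f2(x)=1 implies f1(x)=1
            if values_f1[v] == 0 and values_f2.get(v) is None:
                values_f2[v] = 0     # (5-8): f1(x)=0 implies f2(x)=0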

          The above idea can be used to make the dialogue with the expert more efficient. This was done in this experiment and is described in table 1. Column 8 in table 1 shows the results of using the values of f2 (column 4) and property (5-7) to restore function f1. For instance, for vector #6.2 the expert gave f2(00110) = 1; hence, by using (5-7), we also have f1(00110) = 1 without an additional call to the expert. Column 9 presents the same situation for f2, when one knows the expert's answers for f1. In this case we use the pair <relation (5-8), column 3> instead of the pair <relation (5-7), column 4>, as was the case before. Note that the asterisks "*" in columns 8 and 9 mark the calls that were actually necessary. In order to restore function f1 by using information about the values of f2, we asked the expert 7 times, instead of the 13 calls needed for the independent restoration of f1. That is, we used about half as many calls. In particular, the value f1(10001) = 1 for vector #9.2 was inferred from the fact that f2(10001) = 1. However, to restore function f2 by using f1 we asked the expert 13 times, i.e., as many times as during the independent restoration of f2 (the nested approach was not beneficial in this case). This should not come as a surprise, because the limits described in the previous sections are upper limits. That is, on average the sequential search is expected to outperform a non-sequential approach, but cases like the last one can still occur.

          The previous analysis shows that, in this particular application, the most effective way was first to restore f2 with 13 calls and then to use these function values to restore f1, which required only 7 additional calls. Restoring f2 by using information about the values of f1 required 13 calls, so restoring both functions in E5 in that order would have required 13 + 13 = 26 calls. In the restoration sequence <f2, f1> the total number of calls to restore both functions is 13 + 7 = 20, in comparison with 13 + 13 = 26 calls for independent restoration. We should also add to both cases the 13 + 4 calls which were needed to restore the functions ψ and ϕ. Therefore, in total we needed 20 + 17 = 37 and 26 + 17 = 43 calls, respectively. Note that we began from 2*2,048 and 2*924 calls for both functions. Our totals of 37 and 43 calls are about 100 times less than the number of non-optimized calls (i.e., 2*2,048) and about 50 times less than the upper limit guaranteed by Hansel's lemma (i.e., the 2*924 calls).

5.5.       A Joint Search Approach for Nested Functions

          Next we study the possibility of decreasing the number of calls once more with a joint search of f1 and f2, by using the following switching strategy:

Step 1.    Ask the expert for the value of f2(x1.1) for vector #1.1.
Step 2.    If f2(x1.1) = 1, then ask for the first vector of the next chain; that is, ask for the value of f2(x2.1) for #2.1. Otherwise, ask for the value of f1(x1.1).
Step 3.    If f1(x1.1) = 0, then ask for the value of f1(x1.2) for vector #1.2. Otherwise, switch back to asking about f2, i.e., ask for the value of f2(y), where y is the first vector such that f2(y) = e.

The generalization of the previous steps for an arbitrary vector xi.k is done with steps A and B. Step A applies to f1 and step B to f2. These steps are best described as follows (a compact code sketch is given after the steps):

Step A.    If f1(xi.k) = 0, vector #(i.k) is not the last one in the current chain, and f1(xi.k+1) = e (i.e., empty), then ask the expert for the value of f1(xi.k+1). Otherwise, ask for the value of the first vector xi+1.j from the next chain such that f1(xi+1.j) = e.
           If f1(xi.k) = 1 and f2(xi.k) = e, then ask for the value of f2(xi.k).
           If f1(xi.k) = 1 and f2(xi.k) = 0, then ask for the value of f2(y), where y is the first vector from the same or the next chain such that f2(y) = e.
Step B.    If f2(xi.k) = 1, then ask for the value of f2(y), where y is the first vector of the next chain such that f2(y) = e.
           If f2(xi.k) = 0 and f1(xi.k) = e, then ask the expert for the value of f1(y), where y is the first vector such that f1(y) = e.
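The sketch below is a loose compression of steps 1-3 and A-B, ours rather than a line-by-line transcription: every call is placed so that properties (5-7)/(5-8) and monotonicity fill in as much of the other function as possible. It reuses extend_monotone from the earlier sketch; ask_f1 and ask_f2 stand in for the oracle.

    def joint_search(order, ask_f1, ask_f2):
        """order: all vectors listed in chain order."""
        f1 = {v: None for v in order}
        f2 = {v: None for v in order}
        for v in order:
            if f2[v] is None:
                extend_monotone(f2, v, ask_f2(v))
            if f2[v] == 1 and f1[v] is None:
                extend_monotone(f1, v, 1)       # (5-7), no extra call
            if f1[v] is None:
                extend_monotone(f1, v, ask_f1(v))
                if f1[v] == 0:
                    extend_monotone(f2, v, 0)   # (5-8), no extra call
        return f1, f2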


          The results of applying this strategy to restore the two functions f1 and f2 are presented in table 1, columns 10 and 11, respectively. The number of calls needed to restore both f1 and f2 equals 22 (see table 1, columns 10 and 11): 8 questions about values of f1 and 14 questions about values of f2 (see also the values marked with "*" in columns 10 and 11). Note that the sequential algorithm above required only 20 calls, so in this application the joint search was slightly worse.

          Tables 2 and 3 summarize the results of interviewing the expert under the different strategies. Table 2 presents the numbers of calls for the different kinds of searches for the functions f1, f2, ψ in E5 and for ϕ in E3. Table 3 presents the results for E11. These results summarize the numbers of calls needed for the full restoration of f1 and f2 as functions of 11 binary variables at level 2. In particular, note that the independent search of f1 required 13 calls in E5, plus 13 calls for ψ and 4 calls for ϕ, i.e., the 30 calls shown in table 3. The sequential search <f2, f1> required the same number of calls for f2, but only 7 calls for f1, because the values of f2 and the same ϕ and ψ functions, found during the restoration of f2, were reused (see also tables 2 and 3).



The total shown in the column "f1, f2" (i.e., column 5) represents the number of calls needed to restore both functions. For the last four hierarchical searches in this column we excluded the unnecessary calls for a second restoration of ψ and ϕ, i.e., 17 calls. For instance, the independent search required 26 calls to restore both f1 and f2 in E5, i.e., a total of 26 + 17 = 43 calls in E11, as shown in this column.

          Next, let us describe the meaning of index1 and index2 in tables 2 and 3. We denote the upper limit of calls for the non-optimized search as N1, and the analogous amounts for the other ways of searching for both functions (column 5) as Ni (i = 2,3,4,5,6). In these terms, index1 = N1/Ni (i = 2,3,4,5,6) and index2 = N2/Ni (i = 3,4,5,6). In particular, for the sequential search <f2, f1> these indexes show that we used 3.2 times fewer calls than N1 and 2.0 times fewer calls than N2 in E5, and also 110.7 times fewer calls than N1 and 49.9 times fewer calls than N2 in E11. These interactive experiments demonstrate the potential for achieving significantly high efficiency with the proposed approach for the interactive restoration of monotone Boolean functions.
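For concreteness, these indexes are plain ratios of the call totals in column 5; a two-line check of ours for the sequential search <f2, f1> (row 5 of tables 2 and 3):

    # index1 = N1/N5 and index2 = N2/N5 for the sequential search <f2, f1>
    print(64 / 20, 40 / 20)        # E5:  3.2  2.0
    print(4096 / 37, 1848 / 37)    # E11: 110.70...  49.94...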



             Table 2. Comparison of results in E5 and E3.
#         ways of search                      f1    f2    f1, f2     index1 index2   ϕ   ψ

1              2                              3      4      5          6      7      8   9
1 Non-optimized search (upper limit)          32    32     64          1      -      8   32
2 Optimal search (upper limit)                20    20     40         1.6     1      6   20
3 Independent search                          13    13     26         2.5    1.5     4   13
4 Sequential search <f1, f2>                  13    13     26         2.5    1.5
5 Sequential search <f2, f1>                  7     13     20         3.2    2.0
6 Joint search                                -      -     22         2.9    1.8




Table 3. Comparison of results in E11.
#            ways of search                       f1      f2    f1, f2   index1    index2
1               2                                    3     4      5         6           7
1 Non-optimized search (upper limit)            2,048 2,048     4,096       1           -
2 Optimal search (upper limit)                  924       924   1,848     2.22          1
3 Independent search                             30       30      43      95.2         42.9
4 Sequential search <f1, f2>                     30       13      43      95.2         42.9
5 Sequential search <f2, f1>                         7    30      37      110.7        49.9
6 Joint search                                       -     -      39      105.0        47.4

6.           COMPUTATIONAL INTERACTIVE HIERARCHICAL EXPERIMENT II.

          We assume the same hierarchy of binary features and the same functions ϕ and ψ as in the previous experiment. However, now we consider the same problem with different f1 and f2 functions. We restore each fi as a superposition of the fi, ϕ and ψ functions: fi(x1,x2,x3,x4,x5) = fi(ϕ(w1,w2,w3), ψ(y1,y2,y3,y4,y5), x3, x4, x5), where the hidden functions are:

                    f1(x) = x1x2 ∨ x1x3 ∨ x4 ∨ x5,                                                                    (6-1)

and                 f2(x) = x1x2 ∨ x1x3 ∨ x1x4 ∨ x2x4 ∨ x3x4 ∨ x5.                                                    (6-2)

          The optimized dialogue to restore f1(x1,x2,x3,x4,x5) actually required 12 calls (see table 4, column 3). That is, we used 1.66 times fewer calls than Hansel's bound (i.e., the 20 calls) and 2.66 times fewer than in the non-optimized dialogue (i.e., the 32 calls). The remaining hidden functions are x1 = ϕ(w1,w2,w3) = (w2 ∨ w1w3) and x2 = ψ(y1,y2,y3,y4,y5) = (y1 ∨ y2 ∨ y3 ∨ y4).

          The dialogue for f2(x1,x2,x3,x4,x5) required 14 calls (see also table 4, column 4). This is 1.4 times fewer than Hansel's limit (of 20 calls) and 2.2 times fewer than the non-optimized dialogue (of 32 calls).

Next, we obtained f1 and f2 from table 4 as the following DNFs:

                    f1(x) = x2x4 ∨ x1x2 ∨ x1x4 ∨ x1x3 ∨ x4 ∨ x3x5 ∨ x2x5 ∨ x1x5 ∨ x5,                                  (6-3)

and                 f2(x) = x2x4 ∨ x1x2 ∨ x1x4 ∨ x1x3 ∨ x3x4 ∨ x2x3x5 ∨ x2x5 ∨ x1x5 ∨ x5.                              (6-4)

          Simplification of these DNF expressions allowed us to exclude the conjunctions which are not minimal lower units and to obtain (6-1) and (6-2), respectively. For instance, in (6-3) x1x5 is not a minimal lower unit, since x5 covers it.
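This simplification step is just the absorption law applied to the set of conjunctions. A small sketch of ours, where each term is the set of indexes of its variables:

    def absorb(terms):
        """Drop every conjunction that strictly contains another one
        (absorption law): x1x5 is dropped because the term x5 is present."""
        return [t for t in terms if not any(o < t for o in terms)]

    # absorb([{1, 5}, {5}, {2, 4}]) == [{5}, {2, 4}]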




                                                                27
Table 4. Dialogue Sequences for Experiment II.
                                                  Extension
 #        Vector     f1   f2   ψ      1⇒1                  0⇒0      f1~ f2~ f1^ f2^
 1         2         3    4    5      6                     7       8 9 10 11
Chain 1.
1.1      (01100)    0*    0*   1*   1.2;6.3;7.3        6.4;7.4      0* 0      0*     0*
1.2      (11100)    0*    0*   1    6.4;7.4            5.1;3.1      0* 0      0*     0
Chain 2.
2.1      (01010)    1*    1*   1*   2.2;6.3;8.3        6.1;8.1      1    1* 1*       1*
2.2      (11010)    1     1    1    6.4;8.4            3.1          1    1 1         1
Chain 3.
3.1      (11000)    1*    1*   1*   3.2                8.1;9.1      1    1* 1        1*
3.2      (11001)    1     1    1    7.4;8.4            8.2;9.2      1    1 1         1
Chain 4.
4.1      (10010)    1*    1*   1*   4.2;9.3            6.1;9.1      1    1* 1        1*
4.2      (10110)    1     1    1    6.4;9.4            6.2;5.1      1    1 1         1
Chain 5.
5.1      (10100)    1*    1*   1*   5.2                7.1;9.1      1    1* 1        1*
5.2      (10101)    1     1    1    7.4;9.4            7.2;9.2      1    1 1         1
Chain 6.
6.1      (00010)    1*    0*   1*   6.2;10.3           10.1         1*   0*   1*     0*
6.2      (00110)    1     1*   1    6.3;10.4           7.1          1    1    1      1*
6.3      (01110)    1     1    1    6.4;10.5                        1    1    1      1
6.4      (11110)    1     1    1    10.6                            1    1    1      1
Chain 7.
7.1      (00100)    0     0    1*   7.2;10.4           10.1         0    0    0      0
7.2      (00101)    1*    0*   1    7.3;10.4           10.2         1*   0*   1*     0*
7.3      (01101)    1     1*   1    7.4;10.5           8.2;10.2     1    1*   1      1*
7.4      (11101)    1     1    1    10.6                            1    1    1      1
Chain 8.
8.1      (01000)    0     0    1*   8.2                10.1         0    0    0      0
8.2      (01001)    1*    1*   1    8.3                10.2         1    1*   1      1*
8.3      (01011)    1     1    1    8.4                10.5;10.3    1    1    1      1
8.4      (11011)    1     1    1    10.6               9.3          1    1    1      1
Chain 9.
9.1      (10000)    0*    0*   1*   9.2                10.1         0*   0    0*     0*
9.2      (10001)    1*    1*   1    9.3                10.2         1    1*   1      1*
9.3      (10011)    1     1    1    9.4                10.3         1    1    1      1
9.4      (10111)    1     1    1    10.6               10.4         1    1    1      1
Chain 10.
10.1     (00000)    0     0    0*   10.2                            0    0    0      0
10.2     (00001)    1*    1*   0*   10.3                            1*   0    1*     0
10.3     (00011)    1     1    1*   10.4                            1    1*   1      1*
10.4     (00111)    1     1    1    10.5                            1    1    1      1
10.5     (01111)    1     1    1    10.6                            1    1    1      1
10.6     (11111)    1     1    1                                    1    1    1      1
Calls               12    14   12                                   6    10     20




Below we present the superposition formulas for f1 and f2, obtained from (6-1) and (6-2) with ϕ and ψ substituted for x1 and x2, as the final formulas:

                    f1(x) = x1x2 ∨ x1x3 ∨ x4 ∨ x5 = (w2∨w1w3)(y1∨y2∨y3∨y4) ∨ (w2∨w1w3)x3 ∨ x4 ∨ x5,

           and      f2(x) = (w2∨w1w3)(y1∨y2∨y3∨y4) ∨ (w2∨w1w3)x3 ∨ (w2∨w1w3∨y1∨y2∨y3∨y4∨x3)x4 ∨ x5.

In total, we would have needed 12 + 12 + 4 = 28 calls to restore f1 and 14 + 12 + 4 = 30 calls to restore f2 independently, i.e., 58 calls for both functions. We actually used 12 + 14 + 12 + 4 = 42 calls.
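As a check, the level-2 superposition can be evaluated directly. The sketch below is ours; it hard-codes (6-1) together with the hidden ϕ and ψ quoted earlier, giving f1 as a function of the 11 binary variables:

    def phi(w1, w2, w3):
        return w2 | (w1 & w3)               # x1 = phi(w1,w2,w3)

    def psi(y1, y2, y3, y4, y5):
        return y1 | y2 | y3 | y4            # x2 = psi(...); y5 does not occur

    def f1_level2(w1, w2, w3, y1, y2, y3, y4, y5, x3, x4, x5):
        x1, x2 = phi(w1, w2, w3), psi(y1, y2, y3, y4, y5)
        return (x1 & x2) | (x1 & x3) | x4 | x5    # f1 from (6-1)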

          We used independent, sequential, and joint search to restore these functions. The final results are presented in tables 4 to 6. Table 5 presents the numbers of calls for the different kinds of searches in E5 for the functions f1, f2, ψ and in E3 for ϕ. Table 6 presents the results for E11. These tables summarize the numbers of calls needed for the full restoration of f1 and f2 as functions of 11 binary variables at level 2. In particular, the independent search required 12 calls for f1 in E5, 12 calls for ψ and 4 calls for ϕ, i.e., the 28 calls shown in table 6. The sequential search <f1, f2> required the same number of calls for f1, but only 10 calls for f2, because we used the same ϕ and ψ component functions which were found for f1.

          The total shown in the column labeled "f1, f2" in tables 5 and 6 represents the number of calls required to restore both functions. For the last four hierarchical searches in this column we excluded the unnecessary second restoration of ψ and ϕ, i.e., 16 calls. For instance, the independent search required 26 calls to restore both f1 and f2 in E5, i.e., the total of 42 calls in E11 shown in this column. Similarly to the previous section, index1 shows that we used 3.2 times fewer calls than N1 and 2.0 times fewer calls than N2 in E5, and also 113.8 times fewer calls than N1 and 51.3 times fewer calls than N2 in E11.



           Table 5. Comparison of results in E5 and E3.
#         ways of search                     f1    f2        f1, f2        index1 index2      ϕ    ψ

1            2                               3     4            5            6        7       8    9
1 Non-optimized search (upper limit)         32    32          64            1        -       8    32
2 Optimal search (upper limit)               20    20          40           1.6       1       6    20
3 Independent search f1 and f2               12    14          26           2.5    1.5        4    12
4 Sequential search <f1, f2>                 12    10          22           2.7    1.8
5 Sequential search <f2, f1>                 6     14          20           3.2    2.0
6 Joint search                                -    -           19           3.2       2.0
           Table 6. Comparison of results in E11.
#         ways of search                      f1        f2        f1, f2     index1       index2
1            2                              3      4       5        6         7
1 Non-optimized search (upper limit)      2,048 2,048    4,096      1         -
2 Optimal search (upper limit)             924    924    1,848     2.22       1
3 Independent search f1 and f2             28      30      42      97.5     44.0
4 Sequential search <f1, f2>               28      10      38     107.8     48.6
5 Sequential search <f2, f1>                6      30      36     113.8     51.3
6 Joint search                              -       -      36     113.8     51.3




7.         CONCLUDING REMARKS

          Some computational experiments (see, for instance, (Gorbunov and Kovalerchuk, 1982) and (Triantaphyllou and Soyster, 1995)) have shown that, for many functions, it is possible to significantly decrease the number of questions to an oracle in comparison with the full number of questions (which is equal to 2^n) and also in comparison with the guaranteed pessimistic estimation (formula (2-2) in section 2). Some closed-form results were also obtained for the related problem of retrieving the maximal upper zero (Kovalerchuk and Lavkov, 1984). The results in this paper demonstrate that an interactive approach, based on monotone Boolean functions, has the potential to be very beneficial to interactive machine learning.



                                                 Acknowledgements

The authors are very grateful to Dr. James F. Ruiz, from the Woman's Hospital of Baton Rouge, LA, for his expert

assistance in formulating the medical example described in this paper.




           REFERENCES

1.         Alekseev, V.B. (1988), "Monotone Boolean Functions". Encyclopedia of Mathematics, v. 6, Kluwer

           Academic Publishers, 306-307.

2.         Blumer, A., A. Ehrenfeucht, D. Haussler, and M.K. Warmuth (1989), "Learnability and the Vapnik-Chervonenkis Dimension," Journal of the Association for Computing Machinery, 36 (4), 929-965.

3.         Bongard, M. (1967), Pattern Recognition, Moscow, "Nauka" Publ. (in Russian; English translation: 1970, Spartan Books, NY).

4.    Boros, E., P.L. Hammer, and T. Ibaraki (1994), "Predicting Cause-Effect Relationships from

      Incomplete Discrete Observations." SIAM Journal on Discrete Mathematics, Vol. 7, No. 4, 531-543.

5.    Cohn, D., L. Atlas and R. Ladner (1994), "Improving Generalizing with Active Learning," Machine

      Learning, Vol. 15, 201-221.

6.    Dedekind, R. (1897), "Ueber Zerlegungen von Zahlen durch ihre grössten gemeinsamen Teiler," Festschrift Hoch. Braunschweig u. ges. Werke, II, 103-148.

7.    Dietterich, T.C., and R.S. Michalski (1983), "A Comparative Review of Selected Methods for Learning

      from Examples", R.S. Michalski, J.G. Carbonell, and T.M. Mitchell (eds.). Machine Learning: An

      Artificial Intelligence Approach, Tioga Publishing Company, Palo Alto, CA, 41-81.

8.    Fu, L.M. (1993), "Knowledge-Based Connectionism for Revising Domain Theories," IEEE Transactions

      on Systems, Man, and Cybernetics, v. 23, n. 1, 173-182.

9.    Goldman, S.A., and R.H. Sloan (1992), "The Power of Self-Directed Learning," Machine Learning, Vol. 14, 271-294.

10.   Gorbunov, Yu, and Kovalerchuk, B. (1982), "An Interactive Method of Monotone Boolean Function

      Restoration", Journal of Academy of Science of UzSSR, Engineering, v. 2, 3-6 (in Russian).

11.   Hammer, P.L., and E. Boros (1994), "Logical Analysis: An Overview," RUTCOR Research Report,

      Rutgers University, NJ.

12.   Hansel, G. (1966), "Sur le nombre des fonctions booléennes monotones de n variables," C.R. Acad. Sci. Paris, v. 262, n. 20, 1088-1090.

13.   Haussler D. (1988) Quantifying inductive bias: AI learning algorithms and Valiant's learning

      framework, Artificial Intelligence, 36, 177-221.

14.   Haussler D. and Warmuth M.(1993), The Probably Approximately Correct (PAC) and Other Learning

      Models. Chapter in: Foundation of Knowledge Acquisition: Machine Learning, A.L. Meyrowitz and

      S. Chipman (Eds), Kluwer Academic Publishers, Norwell, MA, 291-312.

15.   Hattori, K., and Y. Torii (1993), "Effective Algorithms for the Nearest Neighbor Method in the Clustering Problem," Pattern Recognition, Vol. 26, No. 5, 741-746.

16.   Kamath, A.P., N.K. Karmakar, K.G. Ramakrishnan, and M.G.C. Resende (1992), "A Continuous




Approach to Inductive Inference," Math. Progr., Vol. 57, 215-238.

17.   Kamath, A.P., N.K. Karmakar, K.G. Ramakrishnan, and M.G.C. Resende (1994), "An Interior Point

      Approach to Boolean Vector Synthesis," Proceedings of the 36-th MSCAS, 1-5.

18.   Kamgar-Parsi, B. and L.N. Kanal (1985), "An Improved Branch-And-Bound Algorithm for Computing

      k-Nearest Neighbors," Pattern Recognition Letters, Vol. 3, 7-12.

19.   Kurita, T. (1991), "An Efficient Agglomerative Clustering Algorithm Using a Heap,"            Pattern

      Recognition, Vol. 24, No. 3, 205-209.

20.   Kleitman, D. (1969), "On Dedekind's problem: the number of monotone Boolean functions". Proc.

      Amer. Math. Soc. 21, 677-682.

21.   Korobkov, V.K. (1965), "On Monotone Functions of the Algebra of Logic," in Problemy Cybernetiki, v. 13, "Nauka" Publ., Moscow, 5-28 (in Russian).

22.   Kovalerchuk, B., and Lavkov, V. (1984), "Retrieval of the maximum upper zero for minimizing the

      number of attributes in regression analysis". USSR Computational Mathematics and Mathematical

      Physics, v. 24, n. 4, 170-175.

23.   Kovalerchuk, B., E. Triantaphyllou, and E. Vityaev, (1995),"Monotone Boolean Functions Learning

      Techniques Integrated with User Interaction." In: Proc. of Workshop "Learning from examples vs.

      programming by demonstration", 12-th International Conference on Machine Learning, Tahoe City,

      U.S.A., 41-48.

24.   Kovalerchuk, B., E. Triantaphyllou, and J.F. Ruiz (1995), "Monotonicity and Logical Analysis of Data:

      A Mechanism of Evaluation of Mammographic and Clinical Data," Technical Report, Louisiana State

      University, Dept. of Ind. Eng., Baton Rouge, LA 70803-6409.

25.   Mangasarian, O.L., W.N. Street, and W.H. Wolberg (1995), "Breast Cancer Diagnosis and Prognosis Via Linear Programming," Operations Research, Vol. 43, No. 4, 570-577.

26.   Mitchell T.(1980), The need for biases in learning generalizations, Technical Report CBM-TR-117,

      Rutgers University, New Brunswick, NJ.

27.   Murphy P.M. and Aha D.W.(1994) UCI repository of machine learning databases. Machine-readable

      data repository, Irvine, CA,University of California, Department of Information and Computer Science.

28.   Natarajan B.K. (1989), On learning sets and functions, Machine Learning, 4(1), 123-133.




29.   Rudeanu, S., (1974), "Boolean functions and equations", North-Holland, NY.

30.   Shavlik, J.W. (1994), "Combining Symbolic and Neural Learning,"          Machine Learning, Vol. 14,

      321-331.

31.   Triantaphyllou, E, A.L. Soyster, and S.R.T. Kumara (1994), "Generating Logical Expressions from

      Positive and Negative Examples via a Branch-and-Bound Approach," Computers and Operations

      Research, Vol. 21, No. 2, 185-197.

32.   Triantaphyllou, E. (1994), "Inference of a Minimum Size Boolean Function from Examples by Using a New Efficient Branch-and-Bound Approach," Journal of Global Optimization, v. 5, n. 1, 69-94.

33.   Triantaphyllou, E. and A. Soyster (1995), "An approach to guided learning of Boolean functions", to

      appear in: Mathematical and Computer Modeling.

34.   Vapnik V.N. (1982) Estimating of Dependencies Based on Empirical Data, Springer-Verlag, New

      York, NY.

35.   Vityaev, E.,   and A. Moskvitin (1993), "Introduction to discovery theory".         Program system:

      DISCOVERY, Logical Methods in Informatics, Computational Systems (Vychislitel'nye sistemy),

      Institute of Mathematics, Russian Academy of Science, Novosibirsk, n. 148, 117-163 (in Russian).

36.   Wolberg, W.H., and O.L. Mangasarian (1990), "Multisurface Method of Pattern Separation for Medical Diagnosis Applied to Breast Cytology," Proceedings of the National Academy of Sciences of the USA, v. 87, n. 23, 9193-9196.

37.   Yablonskii, S. (1986), "Introduction to Discrete Mathematics," Moscow, "Nauka" Publ. (in Russian).

38.   Zagoruiko, N. (1979), "Empirical Forecast," "Nauka" Publ., Novosibirsk (in Russian).




                                                  33

Weitere ähnliche Inhalte

Was ist angesagt?

AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...
AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...
AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...
ijaia
 
What is Odd about Binary Parseval Frames
What is Odd about Binary Parseval FramesWhat is Odd about Binary Parseval Frames
What is Odd about Binary Parseval Frames
Micah Bullock
 
Numerical_PDE_Paper
Numerical_PDE_PaperNumerical_PDE_Paper
Numerical_PDE_Paper
William Ruys
 
abstrakty přijatých příspěvků.doc
abstrakty přijatých příspěvků.docabstrakty přijatých příspěvků.doc
abstrakty přijatých příspěvků.doc
butest
 

Was ist angesagt? (20)

AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...
AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...
AN IMPLEMENTATION, EMPIRICAL EVALUATION AND PROPOSED IMPROVEMENT FOR BIDIRECT...
 
A method for solving unbalanced intuitionistic fuzzy transportation problems
A method for solving unbalanced intuitionistic fuzzy transportation problemsA method for solving unbalanced intuitionistic fuzzy transportation problems
A method for solving unbalanced intuitionistic fuzzy transportation problems
 
SJUT/Mat210/Interpolation/Lagrangian 2013-14S2
SJUT/Mat210/Interpolation/Lagrangian 2013-14S2SJUT/Mat210/Interpolation/Lagrangian 2013-14S2
SJUT/Mat210/Interpolation/Lagrangian 2013-14S2
 
1641
16411641
1641
 
Bay's marko chain
Bay's marko chainBay's marko chain
Bay's marko chain
 
A method for finding an optimal solution of an assignment problem under mixed...
A method for finding an optimal solution of an assignment problem under mixed...A method for finding an optimal solution of an assignment problem under mixed...
A method for finding an optimal solution of an assignment problem under mixed...
 
正準相関分析
正準相関分析正準相関分析
正準相関分析
 
What is Odd about Binary Parseval Frames
What is Odd about Binary Parseval FramesWhat is Odd about Binary Parseval Frames
What is Odd about Binary Parseval Frames
 
Commentz-Walter: Any Better than Aho-Corasick for Peptide Identification?
Commentz-Walter: Any Better than Aho-Corasick for Peptide Identification? Commentz-Walter: Any Better than Aho-Corasick for Peptide Identification?
Commentz-Walter: Any Better than Aho-Corasick for Peptide Identification?
 
Singh gordon-unified-factorization-ecml
Singh gordon-unified-factorization-ecmlSingh gordon-unified-factorization-ecml
Singh gordon-unified-factorization-ecml
 
A SYSTEM OF SERIAL COMPUTATION FOR CLASSIFIED RULES PREDICTION IN NONREGULAR ...
A SYSTEM OF SERIAL COMPUTATION FOR CLASSIFIED RULES PREDICTION IN NONREGULAR ...A SYSTEM OF SERIAL COMPUTATION FOR CLASSIFIED RULES PREDICTION IN NONREGULAR ...
A SYSTEM OF SERIAL COMPUTATION FOR CLASSIFIED RULES PREDICTION IN NONREGULAR ...
 
Numerical_PDE_Paper
Numerical_PDE_PaperNumerical_PDE_Paper
Numerical_PDE_Paper
 
Heuristic Sensing Schemes for Four-Target Detection in Time-Constrained Vecto...
Heuristic Sensing Schemes for Four-Target Detection in Time-Constrained Vecto...Heuristic Sensing Schemes for Four-Target Detection in Time-Constrained Vecto...
Heuristic Sensing Schemes for Four-Target Detection in Time-Constrained Vecto...
 
50620130101006
5062013010100650620130101006
50620130101006
 
The transportation problem in an intuitionistic fuzzy environment
The transportation problem in an intuitionistic fuzzy environmentThe transportation problem in an intuitionistic fuzzy environment
The transportation problem in an intuitionistic fuzzy environment
 
Probabilistic Methods Of Signal And System Analysis, 3rd Edition
Probabilistic Methods Of Signal And System Analysis, 3rd EditionProbabilistic Methods Of Signal And System Analysis, 3rd Edition
Probabilistic Methods Of Signal And System Analysis, 3rd Edition
 
A Study on Youth Violence and Aggression using DEMATEL with FCM Methods
A Study on Youth Violence and Aggression using DEMATEL with FCM MethodsA Study on Youth Violence and Aggression using DEMATEL with FCM Methods
A Study on Youth Violence and Aggression using DEMATEL with FCM Methods
 
Joco pavone
Joco pavoneJoco pavone
Joco pavone
 
Compressing the dependent elements of multiset
Compressing the dependent elements of multisetCompressing the dependent elements of multiset
Compressing the dependent elements of multiset
 
abstrakty přijatých příspěvků.doc
abstrakty přijatých příspěvků.docabstrakty přijatých příspěvků.doc
abstrakty přijatých příspěvků.doc
 

Andere mochten auch

MS-Word version
MS-Word versionMS-Word version
MS-Word version
butest
 
Machine Learning for Speech
Machine Learning for Speech Machine Learning for Speech
Machine Learning for Speech
butest
 
Project MLExAI: Machine Learning Experiences in AI
Project MLExAI: Machine Learning Experiences in AIProject MLExAI: Machine Learning Experiences in AI
Project MLExAI: Machine Learning Experiences in AI
butest
 
Module-related pages
Module-related pagesModule-related pages
Module-related pages
butest
 
CP2083 Introduction to Artificial Intelligence
CP2083 Introduction to Artificial IntelligenceCP2083 Introduction to Artificial Intelligence
CP2083 Introduction to Artificial Intelligence
butest
 
Download
DownloadDownload
Download
butest
 
Alpaydin - Chapter 2
Alpaydin - Chapter 2Alpaydin - Chapter 2
Alpaydin - Chapter 2
butest
 

Andere mochten auch (7)

MS-Word version
MS-Word versionMS-Word version
MS-Word version
 
Machine Learning for Speech
Machine Learning for Speech Machine Learning for Speech
Machine Learning for Speech
 
Project MLExAI: Machine Learning Experiences in AI
Project MLExAI: Machine Learning Experiences in AIProject MLExAI: Machine Learning Experiences in AI
Project MLExAI: Machine Learning Experiences in AI
 
Module-related pages
Module-related pagesModule-related pages
Module-related pages
 
CP2083 Introduction to Artificial Intelligence
CP2083 Introduction to Artificial IntelligenceCP2083 Introduction to Artificial Intelligence
CP2083 Introduction to Artificial Intelligence
 
Download
DownloadDownload
Download
 
Alpaydin - Chapter 2
Alpaydin - Chapter 2Alpaydin - Chapter 2
Alpaydin - Chapter 2
 

Ähnlich wie Monoton-working version-1995.doc

Be2419772016
Be2419772016Be2419772016
Be2419772016
IJMER
 
Kernel methods in machine learning
Kernel methods in machine learningKernel methods in machine learning
Kernel methods in machine learning
butest
 

Ähnlich wie Monoton-working version-1995.doc (20)

nonlinear_rmt.pdf
nonlinear_rmt.pdfnonlinear_rmt.pdf
nonlinear_rmt.pdf
 
Be2419772016
Be2419772016Be2419772016
Be2419772016
 
F5233444
F5233444F5233444
F5233444
 
G
GG
G
 
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
 
PhysRevE.89.042911
PhysRevE.89.042911PhysRevE.89.042911
PhysRevE.89.042911
 
A stochastic algorithm for solving the posterior inference problem in topic m...
A stochastic algorithm for solving the posterior inference problem in topic m...A stochastic algorithm for solving the posterior inference problem in topic m...
A stochastic algorithm for solving the posterior inference problem in topic m...
 
Numerical Solution and Stability Analysis of Huxley Equation
Numerical Solution and Stability Analysis of Huxley EquationNumerical Solution and Stability Analysis of Huxley Equation
Numerical Solution and Stability Analysis of Huxley Equation
 
A_multi-objective_set_covering_problem_A_case_stud.pdf
A_multi-objective_set_covering_problem_A_case_stud.pdfA_multi-objective_set_covering_problem_A_case_stud.pdf
A_multi-objective_set_covering_problem_A_case_stud.pdf
 
A DERIVATIVE FREE HIGH ORDERED HYBRID EQUATION SOLVER
A DERIVATIVE FREE HIGH ORDERED HYBRID EQUATION SOLVERA DERIVATIVE FREE HIGH ORDERED HYBRID EQUATION SOLVER
A DERIVATIVE FREE HIGH ORDERED HYBRID EQUATION SOLVER
 
A Derivative Free High Ordered Hybrid Equation Solver
A Derivative Free High Ordered Hybrid Equation Solver  A Derivative Free High Ordered Hybrid Equation Solver
A Derivative Free High Ordered Hybrid Equation Solver
 
012
012012
012
 
A Genetic Algorithm To Solve The Timetable Problem
A Genetic Algorithm To Solve The Timetable ProblemA Genetic Algorithm To Solve The Timetable Problem
A Genetic Algorithm To Solve The Timetable Problem
 
A derivative free high ordered hybrid equation solver
A derivative free high ordered hybrid equation solverA derivative free high ordered hybrid equation solver
A derivative free high ordered hybrid equation solver
 
Metaheuristic Optimization: Algorithm Analysis and Open Problems
Metaheuristic Optimization: Algorithm Analysis and Open ProblemsMetaheuristic Optimization: Algorithm Analysis and Open Problems
Metaheuristic Optimization: Algorithm Analysis and Open Problems
 
Free Lunch or No Free Lunch: That is not Just a Question?
Free Lunch or No Free Lunch: That is not Just a Question?Free Lunch or No Free Lunch: That is not Just a Question?
Free Lunch or No Free Lunch: That is not Just a Question?
 
Approaches To The Solution Of Intertemporal Consumer Demand Models
Approaches To The Solution Of Intertemporal Consumer Demand ModelsApproaches To The Solution Of Intertemporal Consumer Demand Models
Approaches To The Solution Of Intertemporal Consumer Demand Models
 
Kernel methods in machine learning
Kernel methods in machine learningKernel methods in machine learning
Kernel methods in machine learning
 
syllabus-CBR.pdf
syllabus-CBR.pdfsyllabus-CBR.pdf
syllabus-CBR.pdf
 
Master of Computer Application (MCA) – Semester 4 MC0079
Master of Computer Application (MCA) – Semester 4  MC0079Master of Computer Application (MCA) – Semester 4  MC0079
Master of Computer Application (MCA) – Semester 4 MC0079
 

Mehr von butest

EL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEEL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBE
butest
 
1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同
butest
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
butest
 
Timeline: The Life of Michael Jackson
Timeline: The Life of Michael JacksonTimeline: The Life of Michael Jackson
Timeline: The Life of Michael Jackson
butest
 
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
butest
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
butest
 
Com 380, Summer II
Com 380, Summer IICom 380, Summer II
Com 380, Summer II
butest
 
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet JazzThe MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
butest
 
MICHAEL JACKSON.doc
MICHAEL JACKSON.docMICHAEL JACKSON.doc
MICHAEL JACKSON.doc
butest
 
Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1
butest
 
Facebook
Facebook Facebook
Facebook
butest
 
Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...
butest
 
Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...
butest
 
NEWS ANNOUNCEMENT
NEWS ANNOUNCEMENTNEWS ANNOUNCEMENT
NEWS ANNOUNCEMENT
butest
 
C-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.docC-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.doc
butest
 
MAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.docMAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.doc
butest
 
Mac OS X Guide.doc
Mac OS X Guide.docMac OS X Guide.doc
Mac OS X Guide.doc
butest
 
WEB DESIGN!
WEB DESIGN!WEB DESIGN!
WEB DESIGN!
butest
 

Mehr von butest (20)

EL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEEL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBE
 
1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
 
Timeline: The Life of Michael Jackson
Timeline: The Life of Michael JacksonTimeline: The Life of Michael Jackson
Timeline: The Life of Michael Jackson
 
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
 
Com 380, Summer II
Com 380, Summer IICom 380, Summer II
Com 380, Summer II
 
PPT
PPTPPT
PPT
 
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet JazzThe MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
 
MICHAEL JACKSON.doc
MICHAEL JACKSON.docMICHAEL JACKSON.doc
MICHAEL JACKSON.doc
 
Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1
 
Facebook
Facebook Facebook
Facebook
 
Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...
 
Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...
 
NEWS ANNOUNCEMENT
NEWS ANNOUNCEMENTNEWS ANNOUNCEMENT
NEWS ANNOUNCEMENT
 
C-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.docC-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.doc
 
MAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.docMAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.doc
 
Mac OS X Guide.doc
Mac OS X Guide.docMac OS X Guide.doc
Mac OS X Guide.doc
 
hier
hierhier
hier
 
WEB DESIGN!
WEB DESIGN!WEB DESIGN!
WEB DESIGN!
 

Monoton-working version-1995.doc

  • 1. INTERACTIVE LEARNING OF MONOTONE BOOLEAN FUNCTIONS Last revision: October 21, 1995 BORIS KOVALERCHUK*, EVANGELOS TRIANTAPHYLLOU*1, ANIRUDDHA A. DESHPANDE*, AND EUGENE VITYAEV** * Department of Industrial and Manufacturing Systems Engineering, 3128 CEBA Building, Louisiana State University, Baton Rouge, LA 70803-6409, U.S.A. E-mail: borisk@unix1.sncc.lsu.edu, ietrian@lsuvm.sncc.lsu.edu, or adeshpa@unix1.sncc.lsu.edu ** Institute of Mathematics, Russian Academy of Science, Novosibirsk, 630090, Russia. E-mail: vityaev@math.nsk.su Abstract: This paper presents some optimal interactive algorithms for learning a monotone Boolean function. These algorithms are based on the fundamental Hansel lemma (Hansel, 1966). The advantage of the algorithms is that they are not heuristics, as is often the case of many known algorithms for general Boolean functions, but they are optimal in the sense of the Shannon function. This paper also formulates a new problem for the joint restoration of two nested monotone Boolean functions f1 and f2. This formulation allows further decreasing the dialogue with an expert and restore non-monotone functions of the form f2& f1. The effectiveness of the proposed approach is demonstrated by some illustrative computational experiments related to engineering and medical applications. Key Words. Learning from Examples, Boolean Functions, Monotone Boolean Functions, Shannon Function. 1. INTRODUCTION An interesting problem in machine learning is the one which involves the inference of a Boolean function from collections of positive and negative examples. This is also called the logical analysis problem (Hammer and Boros, 1994) and is a special case of inductive inference. This kind of knowledge extraction is desirable when one is interested in deriving a set of rules which, in turn, can be easily comprehended by a field expert. In many application domains the end users do not have sophisticated computer and modeling expertise. As result, systems which are based on techniques such as neural networks, statistics, or linear programming, are not appealing to them nor these methods provide a plausible explanation of the underlying decision-making process. On the other hand, a logical analysis approach, when it is applicable, can result in rules which are already known to the end user (thus increasing his/her confidence on the method) or lead to new discoveries. In other words, a logical analysis 1 The first two authors gratefully acknowledge the support from the Office of Naval Research (ONR) grant N00014-95-1-0639. 1
  • 2. approach has the distinct advantage of high comprehensibility when it is compared with other methods and to the discovery of new explicit knowledge. The most recent advances in distinguishing between elements of two pattern sets can be classified into six distinct categories. These are: a clause satisfiability approach to inductive inference by Kamath et al. (1992, 1994); some modified branch-and-bound approaches of generating a small set of logical rules by Triantaphyllou et al. (1994), and Triantaphyllou (1994); some improved polynomial time and NP-complete cases of Boolean function decomposition by Boros et al. (1994); linear programming approaches by Wolberg and Mangasarian (1990), and Mangasarian (1995); some knowledge based learning approaches by combining symbolic and connectionist (neural networks) machine based learning as proposed by Shavlik (1994), Fu (1993), Goldman et al. (1994) and Cohn et al. (1994) and finally, some nearest neighbor classification approaches by Hattoru and Torii (1993), Kurita (1991), Kamgar-Parsi and Kanal (1985). From the above six categories, the first three can be considered as logical analysis approaches, since they deal with inference of Boolean functions. The general approach of pure machine learning and inductive inference includes the following two steps: (i) obtaining in advance a sufficient number of examples (vectors) for different classes of observations, and (ii) formulation of the assumptions about the required mathematical structure of the example population (see, for instance, (Bongard, 1966),, Zagoruiko (1979), (Dietterich and Michalski, 1983), and (Vityaev and Moskvitin, 1993)). Human interaction is used just when one obtains new examples and formulates the assumptions. The general problem of learning a Boolean function has many applications. Such applications can be found in the areas of medical diagnosis, hardware diagnosis, astrophysics and finance among others as it is best demonstrated by the plethora of databases in the Machine Learning Repository in the University of California, at Irvine (Murphy and Aha, 1994). However, traditional machine learning approaches have some difficulties. In particular, the size of the hypothesis space is influential in determining the sample complexity of a learning algorithm. That is, the expected number of examples needed to accurately approximate a target concept. The presence of bias in the selection of a hypothesis from the hypothesis space can be beneficial in reducing the sample complexity of a learning algorithm (Mitchell, 1980), (Natarajan, 1989) and (Goldman and Sloan, 1992). Usually the amount of bias in the hypothesis space H is measured in terms of the Vapnik-Chervonenkis dimension, denoted as VCdim(H), (Vapnik, 1982) and (Haussler, 1988). Theoretical results regarding the VCdim(H) are well known (Vapnik, 1982). The results in (Vapnik, 1982) are still better than some other bounds given in (Blumer, 1989). 2
  • 3. However, all these bounds are still overestimates (Haussler and Warmuth, 1993). The learning problem examined in this paper is how one can infer a Boolean function. We assume that initially some Boolean vectors (input examples) are available. These vectors are defined in the space {0,1} n, where n is the number of binary attributes or atoms. Each such vector represents either a positive or a negative example, depending on whether it must be accepted or rejected by the target Boolean function. In an interactive environment we assume that the user sarts with an initial set (which may be empty) of positive and negative examples and then he/she asks an oracle for membership classification of new examples which are selected according to some interactive guided learning strategy. In (Triantaphyllou and Soyster, 1995) there is a discussion of this issue and a guided learning strategy for general Boolean functions is presented and analyzed. The main challenge in inferring a target Boolean function from positive and negative examples is that the user can never be absolutely certain about the correctness of the inferred function, unless he/she has used the entire set of all possible examples which is of size 2n. Apparently, even for a small value of n, this task may be practically impossible to realize. Fortunately, many real life applications are governed by the behavior of a monotone system or can be described by a combination of a small number of monotone systems. Roughly speaking, monotonicity means that the value of the output increases or decreases when the value of the input increases. A formal definition of this concept is provided in the next section. This is common, for example in many medical applications, where for instance, the severity of a condition directly depends on the magnitude of the blood pressure and body temperature within certain intervals. In machine learning monotonicity offers some unique computational advantages. By knowing the value of certain examples, one can easily infer the values of more examples. This, in turn, can significantly expedite the learning process. This paper is organized as follows. The next section provides some basic definitions and results about monotone Boolean functions. The third section presents some key research problems with highlights of some possible solution approaches. The forth section describes the procedure for inferring a monotone Boolean function. The fifth and sixth sections illustrate the previous procedures in terms of two applications. Finally, the paper ends with some concluding remarks and suggestions for possible extensions. 3
  • 4. 2. SOME BASIC DEFINITIONS AND RESULTS ABOUT MONOTONE BOOLEAN FUNCTIONS Let En denote the set of all binary vectors of length n. Let α and β be two such vectors. Then, the vector α = (α1,α2,α3,...,αn) precedes the vector β = (β1,β2,β3,...,βn) (denoted as: α ≥ β) if and only if the following is true: α i  βi, for all 1  i  n. If, at the same time: α ≠ β, then it is said that α strictly precedes β (denoted as: α > β). The two binary vectors α and β are said to be comparable if one of the relationships α ≥ β or β ≥ α holds. A Boolean function f(x) is monotone if for any vectors α,β ∈ En, the relation f(α) ≥ f(β) follows from the fact that α ≥ β. Let Mn be the set of all monotone Boolean functions defined on n variables. A binary vector α of length n is said to be an upper zero of a function f(α) ∈ Mn, if f(α) = 0 and, for any vector β such that β ≤ α, we have f(β) = 0. Also, we shall call the number of unities (i.e., the number of the "1" elements) in vector α as its level and denote this by U(α). An upper zero α of a function f is said to be the maximal upper zero if we have U(β)U(α) for any upper zero β of the function f [Kovalerchuk, Lavkov, 1984]. Analogously we can define the concepts of lower unit and minimal lower unit. A binary vector α of length n is said to be a lower unit of a function f(α) ∈Mn, if f(α) = 1 and, for any vector β from En such that β ≥ α, we get f(β) = 1. A lower unit α of a function f is said to be the minimal lower unit if we have U(α) ≤ U(β)for any lower unit β of the function f.  Examples of monotone Boolean functions are: the constants 0 and 1, the identity function f(x) = x, the disjunction (x1 ∨ x2), the conjunction (x1 ∧ x2), etc. Any function obtained by a composition of monotone Boolean functions is itself monotone. In other words, the class of all monotone Boolean functions is closed. Moreover, the class of all monotone Boolean functions is one of the five maximal (pre-complete) classes in the set of all Boolean functions. That is, there is no closed class of Boolean functions, containing all monotone Boolean functions and distinct from the class of monotone functions and the class of all Boolean functions. The reduced disjunctive normal form (DNF) of any monotone Boolean function, distinct of 0 and 1, does not contain negations of variables. The set of functions {0, 1, (x1 ∨ x2), (x1 ∧ x2)} is a complete system (and moreover, a basis) in the class of all monotone Boolean functions (Alekseev, 1988). For the number ψ(n) of monotone Boolean functions depending on n variables, it is known that:  n ψ (n) = 2  n/2   (1 + ε ( n ) )     (2-1) where 0 < ε(n) < c(logn)/n and c is a constant (see, for instance, (Kleitman, 1969), and (Alekseev, 1988)). Let a monotone Boolean function f be defined with the help of a certain operator Af (also called an oracle) which when fed with a vector α = (α1,α2,α3,...,αn), returns the value of f(α). Let F = {F} be the set of all algorithms which can 4
  • 5. solve the above problem and ϕ(F, f) be the number of accesses to the operator Af required to solve a given problem about inferring a monotone function f∈Mn. Next, we introduce the Shannon function ϕ(n) as follows (Korobkov, 1965): ϕ( n ) = min max ϕ( F, f) . (2-1) F F∈ f ∈M n The problem examined next is that of finding all maximal upper zeros (lower units) of an arbitrary function f∈Mn with the help of a certain number of accesses to the operator Af. It is shown in (Hansel, 1966) that in the case of this problem the following relation is true (known as Hansel's theorem or Hansel's lemma):  n  n ϕ( n ) =  + . (2-2)       n / 2     n / 2  +1  Here n/2 is the floor (the closest integer number to n/2 which is no greater than n/2). In terms of machine learning the set of all maximal upper zeros represents the border elements of the negative pattern. In an analogous manner the set of all minimal lower units to represent the border of a positive pattern. In way a monotone Boolean function represents two "compact patterns". It has been shown (Hansel, 1966) that restoration algorithms for monotone Boolean functions which use Hansel's lemma are optimal in terms of the Shannon function. That is, they minimize the maximum time requirements of any possible restoration algorithm. It is interesting to note at this point that, to the best of our knowledge, Hansel's lemma has not been translated into English, although there are numerous references of it in the non English literature (Hansel wrote his paper in French). This lemma is a one of the final results of the long term efforts in Monotone Boolean functions beginning from Dedekind (1897, in German). One approach to use monotonicity in combination with existing pattern recognition/classification approaches is to consider each one of the available points and apply the concept of monotonicity to generate many more data. However, such an approach would have to explicitly consider a potentially humongous set of input observations. On the other hand, these classification approaches are mostly NP-complete and very CPU time consuming. Thus, the previous approach would be inefficient. In (Gorbunov and Kovalerchuk, 1982) an approach for restoring a single monotone Boolean function is presented. That approach implicitly considers all derivable data from a single observation by utilizing Hansel chains of data (Hansel, 1966). That algorithm is optimal in the sense of the Shannon function. However, although 5
  • 6. this result has also been known in the non English literature (it was originally published in Russian), to the best of our knowledge, it has not yet been described in the English literature. We will call a monotone Boolean function to be an increasing (isotone) monotone Boolean function in contrast with a decreasing monotone Boolean function. A Boolean function is decreasing (antitone) monotone, if for any vectors α,β∈En, the relation f(α) ≥  f(β) follows from the fact that α ≤ β (Rudeanu, 1974, p.149). Each general discrimination (i.e., a general Boolean function) can be described in terms of several increasing gi(x1, ...,xn) and decreasing hi(x1, ...,xn) monotone Boolean functions (Kovalerchuk et al., 1995). That is, the following is always true: ∨ ( g ( x )∧ h ( x ) ). m q( x ) = j j (2-3) j=1 Next, let us consider the case in which q(x) = g(x) ∧ h(x). Here: q+ = g+ ∩ h+, where: q+ = {x: q(x)=1}, g+ = {x: g(x)=1}, and h+ = {x: h(x)=1}. Therefore, one can obtain the set of all positive examples for q as the intersection of the sets of all positive examples for the monotone functions g and h. For a general function q(x), represented as in (2-3), the union of all these intersections gives the full set of positive examples: q+ = ∪q+j = ∪ (g+j ∩ h+j). Often, we do not need so many separate monotone functions. The union of all conjunctions, which do not include negations i forms a single increasing monotone Boolean function (see, for instance, (Yablonskii, 1986) and (Alekseev, 1988)). 3. SOME KEY PROBLEMS AND ALGORITHMS In this section we present some key problems and the main steps of algorithms for solving them. PROBLEM 1: (Inference of a monotone function with no initial data) Conditions: There are no initial examples to initiate learning. All examples should be obtained as a result of the interaction of the designer with an oracle (i.e., an operator Af). It is also required that the discriminant function should be a monotone Boolean function. This problem is equivalent to the requirement that we consider only two compact monotone patterns. These conditions are natural in many applications. Such applications include the estimation of 6
Such applications include the estimation of reliability (see also the illustrative examples for problem 2 later in this section).

Algorithm A1:
Step 1: The user is asked to confirm that the function is monotone.
Step 2: Apply an iterative algorithm for generating examples and constructing the DNF representation. Do so by using Hansel's lemma (to be described in section 4; see also (Hansel, 1966) and (Gorbunov and Kovalerchuk, 1982)).
This algorithm is optimal according to relations (2-1) and (2-2) in section 2.

PROBLEM 2: (The connected classification problem)
Conditions: We should learn to classify an arbitrary vector as a result of the interaction of the designer with an oracle (i.e., an operator Af). The discriminant functions should be a combination of some monotone Boolean functions. There are some initial examples for learning the connected classification problems simultaneously, under the supposition of monotonicity for both of them and the supposition that the positive patterns are nested. Formally, the above consideration means that for all α ∈ En the following relation is always true: f2(α) ≥ f1(α) (where f1 and f2 are the discriminant monotone Boolean functions for the first and the second problems, respectively). This situation is more complex than the previous one. However, the use of additional information from both problems has the potential to accelerate the rate of learning.

Some Examples of Nested Problems

First illustrative example: The engineering reliability problem
For illustrative purposes consider the problem of classifying the states of some system by a reliability-related expert. This expert is assumed to have worked with this particular system for a long time and thus can serve as an "oracle" (i.e., an operator denoted as Af). States of the system are represented by binary vectors from En (the binary space defined on n 0-1 attributes or characteristics). The "oracle" is assumed to be able to answer questions such as: "Is the reliability of a given state guaranteed?" (Yes/No) or: "Is an accident guaranteed for a given state?" (Yes/No). In accordance with these questions, we pose two interrelated nested classification tasks.
The first task is for answering the first question, while the second task is for answering the second question. Next, we define the four patterns which are possible in this situation.

Task 1:
Pattern 1.1: "Guaranteed reliable states of the system" (denoted as E+1).
Pattern 1.2: "Reliability of the states of the system is not guaranteed" (denoted as E-1).

Task 2:
Pattern 2.1: "States of the system with some possibility for normal operation" (denoted as E+2).
Pattern 2.2: "States of the system which guarantee an accident" (denoted as E-2).

Our goal is to extract the way the system functions in the form of two discriminant Boolean functions f1 and f2. The first function is related to task 1, while the second function is related to task 2 (as defined above). Also observe that the following relations must be true: E+2 ⊃ E+1 and f2(α) ≥ f1(α) for all α ∈ En describing a system state, where f1(α) and f2(α) are the discriminant monotone Boolean functions for the first and second tasks, respectively.

Second illustrative example: Breast cancer diagnosis
For the second illustrative example consider the following nested classification problem related to breast cancer diagnosis. The first sub-problem is related to the clinical question of whether a biopsy or short term follow-up is necessary or not. The second sub-problem is related to the question of whether the radiologist believes that the current case is highly suspicious for malignancy or not. It is assumed that if the radiologist believes that the case is malignant, then he/she will also definitely recommend a biopsy. More formally, these two sub-problems are defined as follows:

The Clinical Management Sub-Problem: One and only one of the following two disjoint outcomes is possible.
1) "Biopsy/short term follow-up is necessary", or:
2) "Biopsy/short term follow-up is not necessary".
The Diagnosis Sub-Problem: Similarly, one and only one of the following two disjoint outcomes is possible. That is, a given case is:
1) "Highly suspicious for malignancy", or:
2) "Not highly suspicious for malignancy".

It can easily be seen that the corresponding states satisfy the nesting conditions of the first illustrative example.

Third illustrative example: Radioactivity contamination detection
The third illustrative example is also diagnosis related. The issue now is how to perform radioactivity contamination tests. Usually, the more time demanding a test is, the more accurate the result will be. Therefore, an efficient strategy would be to perform the less time demanding tests first and, if the need arises, to perform the more time demanding (and also more accurate) tests later (this is analogous to the previous breast cancer problem, in which a biopsy can yield more accurate results but is also more expensive). The corresponding two nested problems can be defined as follows:

Sub-problem 1: Diagnosis of radioactivity contamination, low risk case:
Pattern 1: "Surely contaminated"
Pattern 2: "Not necessarily contaminated"

Sub-problem 2: Very high risk for contamination:
Pattern 1: "Extra detection of contamination is necessary"
Pattern 2: "Extra detection of contamination is not necessary"

Again, it can easily be seen that the corresponding states satisfy the nesting conditions of the first illustrative example. Next, the outline of an algorithm for solving this kind of problem is presented.

Algorithm A2:
Step 1: The user is asked to confirm monotonicity for both tasks (i.e., the monotonicity of the functions underlying the patterns related to the previous two tasks).
Step 2: Test the monotonicity of the initial examples for both tasks.
Step 3: Reject the examples which violate monotonicity.
Step 4: Restore the f(α) values for the elements of Hansel's chains, using the monotonicity property and the known examples (Hansel, 1966; Gorbunov and Kovalerchuk, 1982); see also section 4.
Step 5: Apply a dual iterative algorithm for the additional examples which are generated based on Hansel's lemma.

Next, we discuss step 5 in more detail:
Step 5.1: Generate the next vector αi1 for the interaction with Af1.
Step 5.2: Generate the next vector αi2 for the interaction with Af2, in the same way as for Af1.
Step 5.3: Estimate the number of vectors (N1,1j) for which we can compute f1(α) and f2(α) without asking the oracles Af1 and Af2, by temporarily assuming that fj(αi1) = 1 (for j = 1, 2), if f2(αi1) = e, where e stands for the empty (undetermined) value.
Step 5.4: Estimate the number of vectors (N1,0j) for which we can compute f1(α) and f2(α) without asking the oracles Af1 and Af2, by temporarily assuming that fj(αi1) = 0 (for j = 1, 2), if f2(αi1) = e.
Step 5.5: Estimate the number of vectors (N2,1j) for which we can compute f1(α) and f2(α) without asking the oracles Af1 and Af2, by temporarily assuming that fj(αi2) = 1 (for j = 1, 2), if f1(αi2) = e.
Step 5.6: Estimate the number of vectors (N2,0j) for which we can compute f1(α) and f2(α) without asking the oracles Af1 and Af2, by temporarily assuming that fj(αi2) = 0 (for j = 1, 2), if f1(αi2) = e.
Step 5.7: Choose the variant with the maximal number of vectors from steps 5.3-5.6 in order to decide which example, αi1 or αi2, should be sent for correct classification by the appropriate oracle.

Comment: In step 5.7 we realize a local algorithm. This means that we consider only suppositions about fj(αi1) and fj(αi2). We do not simultaneously consider further suppositions about fj(αi+1,1), fj(αi+1,2), fj(αi+2,1), fj(αi+2,2), etc. for the next possible results of the interaction. Such an extension of algorithm A2 might decrease the number of interactions further, but it also leads to more computation.

4. AN ALGORITHM FOR RESTORING A MONOTONE BOOLEAN FUNCTION

4.1. General scheme of the algorithm
Next we present the algorithm RESTORE for the interactive restoration of a monotone Boolean function, together with the two procedures GENERATE and EXPAND, in a pseudo programming language.
ALGORITHM "RESTORE"
Input. Dimension n of the binary space and access to an oracle Af (which has to be inferred).
Output. A monotone Boolean function restored after a minimal number (according to the Shannon criterion given as relation (2-2) in section 2) of calls to the oracle Af.
Method.
1) Construction of Hansel chains (see section 4.2).
2) Restoration of a monotone Boolean function starting from chains of minimal length and finishing with chains of maximal length. This ensures that the number of membership calls to the oracle Af is no more than the limit presented in formula (2-2), which in turn guarantees that the number of calls is no more than the limit presented in formula (2-1).

BEGIN
  Set i=1; {initialization}
  DO WHILE (function f(x) is not entirely restored)
    Step 1. Use procedure GENERATE to generate element ai;
    Step 2. Call oracle Af to retrieve the value of f(ai);
    Step 3. Use procedure EXPAND to deduce the values of other elements in Hansel chains (i.e., sequences of examples in En) by using the value of f(ai), the structure of element ai and the monotonicity property;
    Step 4. Set i ← i+1;
  RETURN
END;

PROCEDURE "GENERATE": Used to generate an element ai to be classified by the oracle Af.
Input. Dimension n of the binary space.
Output. The next element to send for classification by the oracle Af.
Method: Proceed from minimal to maximal Hansel chains.
IF i=1 THEN {where i is the index of the current element}
BEGIN
  Step 1.1. Retrieve all Hansel chains of minimal length;
  Step 1.2. Randomly choose the first chain c1 among the chains retrieved in step 1.1;
  Step 1.3. Set the first element a1 as the minimal element of chain c1;
END
ELSE
BEGIN
  Set k=1 {where k is the index number of a Hansel chain};
  DO WHILE (NOT all Hansel chains are tested)
    Step 2.1. Find the last element ai of chain Ck which still has no confirmed f(ai) value;
    Step 2.2. If step 2.1 did not return an element ai, then randomly select the next Hansel chain Ck+1 of the same length as the current chain Ck;
    Step 2.3. Find the least element ai from chain Ck+1 which still has no confirmed f(ai) value;
    Step 2.4. If step 2.3 did not return an element ai, then randomly choose a chain Ck+1 of the next available length. Go to this chain Ck+1;
    Step 2.5. Set k ← k+1;
  RETURN
END

PROCEDURE "EXPAND": Used to extend the f(ai) value to other elements by using monotonicity and Hansel chains.
Input. The f(ai) value.
Output. An extended set of elements with known f(x) values.
Method: The method is based on the following properties: if x ≥ ai and f(ai)=1, then f(x)=1; and if x ≤ ai and f(ai)=0, then f(x)=0.
BEGIN
  Step 1. Generate the set {x} of elements comparable with ai, i.e., such that x ≥ ai or x ≤ ai;
  Step 2. If f(ai)=1, then ∀x (x ≥ ai) set f(x)=1;
          If f(ai)=0, then ∀x (x ≤ ai) set f(x)=0;
  Step 3. Store the f(x) values which were obtained in step 2;
END

4.2. Construction of Hansel Chains
Several steps in the previous algorithms deal with Hansel chains. Next we describe how to construct all Hansel chains for a given state space En of dimension n. First, we give a formal definition of a general chain. A chain is a sequence of binary vectors (examples) a1, a2, ..., aq such that ai+1 is obtained from ai by changing a "0" element to "1". That is, there is an index j such that ai,j = 0, ai+1,j = 1, and for any t ≠ j we have ai,t = ai+1,t. For instance, the list <01000, 01100, 01110> of three vectors is a chain.

To construct all Hansel chains we use an iterative procedure as follows. Let En = {0,1}n be the n-dimensional binary cube. All chains for En are constructed from the chains for En-1. Therefore, we begin the construction for En by starting with E1 and iteratively proceed to En.

(i) Chains for E1. For E1 there is only a single (trivial) chain, namely <0, 1>. Note that "0" is the first element (a vector of size 1), while "1" is the second element.
(ii) Chains for E2. We take E1 and prepend to each element of its chain the component "0". Thus we obtain the set {00, 01}, which is called E2min. Similarly, by prepending "1" to E1 = {0, 1} we construct the set E2max = {10, 11}. Both E2min and E2max are isomorphic to E1, because they carry the isomorphic chains <00,01> and <10,11>. The union of E2min and E2max is E2; that is, E2 = E2min ∪ E2max. Observe that the chains <00,01> and <10,11> cover E2 entirely, but they are not Hansel chains. To obtain Hansel chains we need to modify them as follows. Adjust the chain <00,01> by adding the maximum element (11) of the chain <10,11>; that is, obtain the new chain <00,01,11>. Also remove the element (11) from the chain <10,11>. Hence, we obtain the two new chains <00,01,11> and <10>. These chains are the Hansel chains (Hansel, 1966) for E2. That is, E2 = {<00,01,11>, <10>}.

(iii) Chains for E3. The Hansel chains for E3 are constructed in a manner similar to the chains for E2. First, we double and adjust the chains of E2 to obtain E3min and E3max. The following relation is again true: E3 = E3min ∪ E3max, where E3min = {<000,001,011>, <010>} and E3max = {<100,101,111>, <110>}. We perform the same chain modification as for E2. That is, we first choose two isomorphic chains; let these be the two chains of maximal length, <000,001,011> and <100,101,111>. We add the maximal element (111) of <100,101,111> to <000,001,011> and drop it from <100,101,111>. In this way we obtain the two new chains <000,001,011,111> and <100,101>. Next, we repeat this procedure for the remaining pair of isomorphic chains, <010> and <110>. In this simple case we obtain just one new chain <010,110> (the second chain becomes empty). Therefore, E3 consists of the three Hansel chains <010,110>, <100,101> and <000,001,011,111>. That is, E3 = {<010,110>, <100,101>, <000,001,011,111>}.

In a similar manner one can construct the Hansel chains for E4, E5 and so on. Below we present the general scheme for En under the supposition that we have all Hansel chains for En-1. Note that the Hansel chains of En-1 can be obtained recursively from the Hansel chains of E1, E2, ..., En-2.
Chains for En. Suppose that En-1 = {C1, ..., Ci, ..., Ck}, where each Ci is a Hansel chain for En-1. First, we double En-1 and, by prepending the "0" and "1" components as before, we construct the two isomorphic sets of chains Enmin = {0C1, 0C2, ..., 0Ck} and Enmax = {1C1, 1C2, ..., 1Ck}. The sets Enmin and Enmax entirely cover En; that is, En = Enmin ∪ Enmax.

Next, consider a chain Ci = {ci1, ..., cij, ..., cim(i)} and the pair of isomorphic chains 0Ci and 1Ci, in which 0Ci = {0ci1, ..., 0cij, ..., 0cim(i)} and 1Ci = {1ci1, ..., 1cij, ..., 1cim(i)}. Note that in these expressions 0cij and 1cij are n-dimensional binary vectors, while cij is an (n-1)-dimensional vector. We construct the new pair of Hansel chains 0CiH = {0ci1, ..., 0cij, ..., 0cim(i), 1cim(i)} and 1CiH = {1ci1, ..., 1cij, ..., 1cim(i)-1} from 0Ci and 1Ci by using the same "cut" and "add" procedure as in the case of E3. We repeat this operation for all pairs of isomorphic chains in Enmin and Enmax to obtain the entire set of Hansel chains for En.

In figures 2 and 3 we illustrate this procedure of chain construction for E2 and E3. Figure 2 shows the transformation of E1 into E2max and E2min and, after that, the construction of the new chains from them. The nodes of a chain are connected with solid lines. At first we have an edge between (10) and (11); then we cut it and obtain the new chain <00,01,11> by attaching (11) to the chain <00,01>. The node (10) forms a trivial chain with a single element. Figure 3 begins from two copies of the output chains shown in figure 2, with a "0" prepended to the first copy and a "1" to the second one. The output in figure 3 presents the chains in E3, again connected with solid lines; they are obtained with the same "cut and add" procedure.
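The doubling and "cut and add" construction just described is compact enough to state in code. The following Python sketch is our illustration (it is not part of the original paper); vectors are represented as '0'/'1' strings:

    from math import comb

    def hansel_chains(n: int) -> list[list[str]]:
        """All Hansel chains of the n-cube En, built recursively
        by the doubling and "cut and add" procedure of section 4.2."""
        if n == 1:
            return [['0', '1']]                 # the single trivial chain <0, 1>
        chains = []
        for c in hansel_chains(n - 1):
            low = ['0' + v for v in c]          # chain 0Ci in En_min
            high = ['1' + v for v in c]         # chain 1Ci in En_max
            low.append(high.pop())              # move the maximal element of 1Ci to 0Ci
            chains.append(low)
            if high:                            # drop chains that became empty
                chains.append(high)
        return chains

    # Sanity checks against the text of section 4.2:
    assert hansel_chains(2) == [['00', '01', '11'], ['10']]
    assert sorted(map(len, hansel_chains(3))) == [2, 2, 4]
    assert len(hansel_chains(5)) == comb(5, 2)  # C(n, floor(n/2)) chains cover En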
The justification of the above construction is Hansel's lemma (Hansel, 1966): the binary cube En is covered by C(n, ⌊n/2⌋) disjoint chains with the following properties:
a) the number of chains of length n-2p+1 is C(n, p) − C(n, p−1), where 0 ≤ p ≤ ⌊n/2⌋ (the minimal element of each such chain is a vector with p units ("1") and n−p zeros ("0"));
b) for any three sequential elements a1 < a2 < a3 of a chain, the complement of a2 in the interval [a1, a3] belongs to a chain of length n-2p-1. The vector b is the complement if it forms a square with a1, a2, a3 (see fig. 1). An example of such a square is: a1=(01000), a2=(01100), a3=(01110) and b=(01010).

[Figure 1 (omitted here): a binary square.]
[Figure 2 (omitted here): construction of the Hansel chains for E2 from E2min and E2max.]
[Figure 3 (omitted here): construction of the Hansel chains for E3 from E3min and E3max.]
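Putting the pieces together, the following Python sketch (again our illustration, reusing hansel_chains from the sketch above and assuming the oracle is given as a callable) captures the essence of RESTORE and EXPAND: walk the chains from shortest to longest, query the oracle only for undetermined elements, and propagate each answer by monotonicity:

    from itertools import product

    def leq(a: str, b: str) -> bool:
        """Componentwise order on binary vectors written as '0'/'1' strings."""
        return all(x <= y for x, y in zip(a, b))

    def expand(a: str, value: int, f: dict[str, int]) -> None:
        """EXPAND: deduce f on every element comparable with a."""
        for bits in product('01', repeat=len(a)):
            x = ''.join(bits)
            if value == 1 and leq(a, x):
                f.setdefault(x, 1)      # everything above a is also 1
            elif value == 0 and leq(x, a):
                f.setdefault(x, 0)      # everything below a is also 0

    def restore(n: int, oracle) -> dict[str, int]:
        """RESTORE: interactive restoration of a monotone function."""
        f: dict[str, int] = {}
        for chain in sorted(hansel_chains(n), key=len):   # shortest chains first
            for v in chain:
                if v not in f:                            # value not yet deduced
                    f[v] = oracle(v)                      # one membership call
                    expand(v, f[v], f)
        return f

    # Example with a hidden monotone target w2 OR (w1 AND w3):
    calls = []
    def oracle(v):
        calls.append(v)
        return int(v[1] == '1' or (v[0] == '1' and v[2] == '1'))
    table = restore(3, oracle)
    print(len(calls))   # 4 calls in this run, within the Hansel bound of 6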
5. COMPUTATIONAL INTERACTIVE HIERARCHICAL EXPERIMENT I

5.1. Problem Description
In this section we demonstrate the previous issues in terms of an illustrative example from the area of breast cancer diagnosis. It is assumed that some preliminary medical tests have been performed (such as a mammogram) and the medical doctor wishes to determine what to do next. That is, should he/she proceed with a biopsy or not?

The problem considered in this section is a nested one. That is, it comprises two interrelated sub-problems. The first problem is related to the question of whether a biopsy is necessary. The second problem is related to the question of whether the human expert (medical doctor) believes that the current case represents a malignant case or not. It is assumed that if the doctor believes that the case is malignant, then he/she will also definitely recommend a biopsy. Recall that in section 3 we defined these two sub-problems as follows:

The Clinical Management Sub-Problem: One and only one of the following two disjoint outcomes is possible.
1) "Biopsy/short term follow-up is necessary", or:
2) "Biopsy/short term follow-up is not necessary".

The Diagnosis Sub-Problem: Similarly, one and only one of the following two disjoint outcomes is possible. That is, a given case is:
1) "Highly suspicious for malignancy", or:
2) "Not highly suspicious for malignancy".

The previous two interrelated sub-problems can be formulated as the restoration problem of two nested monotone Boolean functions. The concept of monotonicity was explained to a medical expert, who felt comfortable with the idea of using monotone Boolean functions. Moreover, the dialogue which followed confirmed the validity of this assumption. The underlying Boolean functions were defined on 11 binary attributes. This simplification was done to illustrate the applicability of the proposed methodology. More details on this problem can be found in (Kovalerchuk, Triantaphyllou and Ruiz, 1995).
From the above discussion it follows that we will now work in the 11-dimensional binary space (i.e., in E11). The two corresponding functions are f1 and f2. Function f1 returns true (1) if the decision is "biopsy/short term follow-up is necessary", and false (0) otherwise. Similarly, function f2 returns true (1) if the decision is "highly suspicious for malignancy", and false (0) otherwise.

Full restoration of the functions fi (for i = 1, 2) without any optimization of the dialogue process would have required up to 2^11 = 2,048 calls (membership inquiries) to an expert (i.e., the Af operator or "oracle"), as is the case for any Boolean function defined on 11 attributes. However, according to Hansel's lemma (i.e., relation (2-2) in section 2) and under the assumption of monotonicity, an optimal (i.e., with a minimal number of accesses) dialogue for restoring a monotone Boolean function would require at most C(11,5) + C(11,6) = 462 + 462 = 924 calls to the oracle. Observe that this new value is 2.2 times smaller than the previous upper limit of 2,048 calls. However, even this upper limit of 924 calls can be reduced further, as explained in the next section.

5.2. Hierarchical Decomposition
In this particular breast cancer diagnosis problem the medical expert indicated that the original 11 binary attributes can be organized in terms of a hierarchy. This hierarchy follows from the definition of these 11 medically oriented binary attributes. Next, it will be shown that this hierarchy can be exploited to reduce the required number of calls to the "oracle" even further. We will not describe the medical justification of this hierarchy here (more details can be found in (Kovalerchuk, Triantaphyllou, and Ruiz, 1995)). The hierarchy used in this example is as follows:

    Level 1 (5 attributes)        Level 2 (all 11 attributes)
    x1                      →     w1, w2, w3
    x2                      →     y1, y2, y3, y4, y5
    x3                      →     x3
    x4                      →     x4
    x5                      →     x5

In this hierarchy the two attributes x1 and x2 correspond to two monotone Boolean functions which also have to be restored. That is, the following is true: x1 = ϕ(w1, w2, w3) and x2 = ψ(y1, y2, y3, y4, y5).
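To see the arithmetic of the savings, the following short Python check (our illustration; hansel_bound is the sketch from section 2) compares the flat bound in E11 with the bounds for the hierarchical pieces:

    flat = hansel_bound(11)            # 924 calls for one function on 11 attributes
    level1 = hansel_bound(5)           # 20 calls for fi(x1, ..., x5) in E5
    phi = hansel_bound(3)              # 6 calls for x1 = phi(w1, w2, w3) in E3
    psi = hansel_bound(5)              # 20 calls for x2 = psi(y1, ..., y5) in E5
    print(flat, level1 + phi + psi)    # 924 vs 46 (upper bounds on calls)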
The expert indicated that these two functions ϕ and ψ should be common to both problems (i.e., the biopsy and the cancer problems). Therefore, the following relation holds between the fi (for i = 1, 2) and the two functions ϕ and ψ:

    fi(x1,x2,x3,x4,x5) = fi(ϕ(w1,w2,w3), ψ(y1,y2,y3,y4,y5), x3, x4, x5).

Given the above analysis and decomposition, it follows from Hansel's lemma (relation (2-2) in section 2) that the restoration problem at level 1 would have required at most C(5,2) + C(5,3) = 10 + 10 = 20 calls to the "oracle". Observe that without exploiting the monotonicity property, an exhaustive dialogue requires 2^5 = 32 calls, which is 1.6 times higher. That is, the exploitation of a hierarchy can even benefit the restoration of general (i.e., not necessarily monotone) Boolean functions.

The above bounds represent upper limits. When we actually interviewed the medical expert, the target function f1 was fully restored with only 13 membership inquiries (calls). This dialogue is presented in table 1, column 3. Table 1 is explained in more detail in sections 5.3 to 5.5. Therefore, the number of actual calls was 1.5 times less than the upper limit given by Hansel's lemma and 2.5 times less than the exhaustive search (which would have required up to 32 calls). Also, the same number of 13 calls was adequate to restore each of the functions f2(x1,x2,x3,x4,x5) and x2 = ψ(y1,y2,y3,y4,y5) (see also table 1, columns 4 and 5).
Table 1. Dialogue Sequences for Experiment I.
(An asterisk "*" marks a value given directly by the expert. Columns 6 and 7 list the vectors to which a value of 1 or 0, respectively, is extended by monotonicity. Columns 8-9 correspond to the sequential search of section 5.4 and columns 10-11 to the joint search of section 5.5.)

    #      Vector    f1   f2   ψ    1⇒1           0⇒0        f1~  f2~  f1^  f2^
    (1)    (2)       (3)  (4)  (5)  (6)           (7)        (8)  (9)  (10) (11)
    Chain 1.
    1.1    (01100)   1*   1*   1*   1.2;6.3;7.3   6.4;7.4    1*   1*   1    1*
    1.2    (11100)   1    1    1    6.4;7.4       5.1;3.1    1    1    1    1
    Chain 2.
    2.1    (01010)   1*   0*   1*   2.2;6.3;8.3   6.1;8.1    1*   0*   1*   0*
    2.2    (11010)   1    1*   1    6.4;8.4       3.1        1    1*   1    1*
    Chain 3.
    3.1    (11000)   1*   1*   1*   3.2           8.1;9.1    1    1*   1    1*
    3.2    (11001)   1    1    1    7.4;8.4       8.2;9.2    1    1    1    1
    Chain 4.
    4.1    (10010)   1*   0*   1*   4.2;9.3       6.1;9.1    1*   0*   1*   0*
    4.2    (10110)   1    1*   1    6.4;9.4       6.2;5.1    1    1*   1    1*
    Chain 5.
    5.1    (10100)   1*   1*   1*   5.2           7.1;9.1    1    1*   1    1*
    5.2    (10101)   1    1    1    7.4;9.4       7.2;9.2    1    1    1    1
    Chain 6.
    6.1    (00010)   0*   0    0*   6.2;10.3      10.1       0*   0    0*   0*
    6.2    (00110)   1*   1*   0*   6.3;10.4      7.1        1    1*   1*   1*
    6.3    (01110)   1    1    1*   6.4;10.5                 1    1    1    1
    6.4    (11110)   1    1    1    10.6                     1    1    1    1
    Chain 7.
    7.1    (00100)   1*   1*   0*   7.2;10.4      10.1       1    1*   1    1*
    7.2    (00101)   1    1    0*   7.3;10.4      10.2       1    1    1    1
    7.3    (01101)   1    1    1*   7.4;10.5      8.2;10.2   1    1    1    1
    7.4    (11101)   1    1    1    10.6                     1    1    1    1
    Chain 8.
    8.1    (01000)   0*   0    1*   8.2           10.1       0*   0    0*   0
    8.2    (01001)   1*   1*   1    8.3           10.2       1    1*   1    1*
    8.3    (01011)   1    1    1    8.4           10.5;10.3  1    1    1    1
    8.4    (11011)   1    1    1    10.6          9.3        1    1    1    1
    Chain 9.
    9.1    (10000)   0*   0    1*   9.2           10.1       0*   0    0*   0*
    9.2    (10001)   1*   1*   1    9.3           10.2       1    1*   1*   1*
    9.3    (10011)   1    1    1    9.4           10.3       1    1    1    1
    9.4    (10111)   1    1    1    10.6          10.4       1    1    1    1
    Chain 10.
    10.1   (00000)   0    0    0    10.2                     0    0    0    0
    10.2   (00001)   1*   0*   0    10.3                     1*   0*   1*   0
    10.3   (00011)   1    1*   0    10.4                     1    1*   1    1*
    10.4   (00111)   1    1    1    10.5                     1    1    1    1
    10.5   (01111)   1    1    1    10.6                     1    1    1    1
    10.6   (11111)   1    1    1                             1    1    1    1
    Total calls      13   13   13                            7    13   22 (joint)
Next, x1 was restored as a function ϕ(w1,w2,w3). In this case the exhaustive search would require 2^3 = 8 calls. On the other hand, Hansel's lemma provides an upper limit of C(3,1) + C(3,2) = 3 + 3 = 6 calls. This value is 1.33 times less than what the exhaustive search would have required. Although the previous numbers of calls are not excessively high, they still provide a sense of the significant savings which can be achieved if monotonicity is established and utilized. The numbers in this example are intended only to illustrate the proposed methodology. Next, we present the optimal dialogue for restoring the ϕ(w1,w2,w3) function:

                  Chain 1       Chain 2       Chain 3
    Vectors:      <010, 110>    <100, 101>    <000, 001, 011, 111>
    Value of ϕ:    1*   1        0*   1*       0   0*   1   1

In the above illustration "1*" and "0*" indicate answers given directly by the expert. For instance, during the dialogue the expert decided that ϕ(010) = 1, ϕ(101) = 1, and ϕ(100) = ϕ(001) = 0. The values without an asterisk (i.e., a plain "1" or "0") were obtained by utilizing the monotonicity property. Recall that these three Hansel chains for E3 were constructed in section 4 (where the chains for E3 were determined). In this experiment the four calls about the function values for the examples (010), (100), (101) and (001) were sufficient for full restoration. Monotonicity allowed us to extend these four values to the other examples. For instance, from the expert's testimony that ϕ(010) = 1 it is derived that ϕ(110) = 1, and from ϕ(001) = 0 it is derived that ϕ(000) = 0. Observe that these 4 calls to the expert are 2 times less than the maximum number of calls (i.e., 8) and 1.5 times less than the guaranteed number of calls provided by Hansel's lemma (which is equal to 6).

The above approach leads to the full definition of the functions ϕ and ψ for level 2 as follows:

    x1 = ϕ(w1,w2,w3) = w2 ∨ w1w3 ∨ w2w3 = w2 ∨ w1w3,                               (5-1)
and
    x2 = ψ(y1,y2,y3,y4,y5) = y1y2 ∨ y2y3 ∨ y2y4 ∨ y1y3 ∨ y1y4 ∨ y2y3y4 ∨ y2y3y5 ∨ y2 ∨ y1 ∨ y3y4y5
                           = y2 ∨ y1 ∨ y3y4y5.                                      (5-2)

The above Boolean expressions were determined from the information depicted in table 1. This is explained in detail in section 5.3.
Similarly, table 1 suggests that for level 1 the target function of x1, x2, x3, x4, x5 for the biopsy sub-problem is defined as follows:

    f1(x) = x2x3 ∨ x2x4 ∨ x1x2 ∨ x1x4 ∨ x1x3 ∨ x3x4 ∨ x3 ∨ x2x5 ∨ x1x5 ∨ x5
          = x2x4 ∨ x1x2 ∨ x1x4 ∨ x3 ∨ x5,                                          (5-3)

and for the cancer sub-problem it is defined as:

    f2(x) = x2x3 ∨ x1x2x4 ∨ x1x2 ∨ x1x3x4 ∨ x1x3 ∨ x3x4 ∨ x3 ∨ x2x5 ∨ x1x5 ∨ x4x5
          = x1x2 ∨ x3 ∨ x2x5 ∨ x1x5 ∨ x4x5 = x1x2 ∨ x3 ∨ (x2 ∨ x1 ∨ x4)x5.        (5-4)

Next we compare the two functions f1 and f2. Observe that f1(x) = A ∨ (x2 ∨ x1)x4 ∨ x5 and f2(x) = A ∨ (x2 ∨ x1 ∨ x4)x5, where A = x1x2 ∨ x3. Hence, these two functions differ only in the parts (x2 ∨ x1)x4 ∨ x5 and (x2 ∨ x1 ∨ x4)x5.

The simplification of the disjunctive normal form (DNF) expressions in (5-1) to (5-4) allowed us to exclude some conjunctions which are not minimal lower units. For instance, in (5-2) the term y1y4 is not a minimal lower unit, because y1 covers it. Thus, the right hand sides of expressions (5-1) to (5-4) form the minimal DNFs. Next we present a superposition of expressions (5-1) to (5-4) for f1 and f2 as the proposed final formulas. For performing a biopsy:

    f1(x) = x2x4 ∨ x1x2 ∨ x1x4 ∨ x3 ∨ x5
          = (y2 ∨ y1 ∨ y3y4y5)x4 ∨ (w2 ∨ w1w3)(y2 ∨ y1 ∨ y3y4y5) ∨ (w2 ∨ w1w3)x4 ∨ x3 ∨ x5,    (5-5)

and for cancer:

    f2(x) = x1x2 ∨ x3 ∨ x2x5 ∨ x1x5 ∨ x4x5
          = (w2 ∨ w1w3)(y2 ∨ y1 ∨ y3y4y5) ∨ x3 ∨ (y2 ∨ y1 ∨ y3y4y5)x5 ∨ (w2 ∨ w1w3)x5 ∨ x4x5.  (5-6)

In total we needed 13 + 13 + 4 = 30 calls to restore f1 as the superposition f1(x1,x2,x3,x4,x5) = f1(ϕ(w1,w2,w3), ψ(y1,y2,y3,y4,y5), x3, x4, x5). The same 30 calls would also be needed to restore f2 independently. In reality, however, we used 30 + 13 = 43 calls instead of 60 calls to restore both functions, due to the fact that the component functions ϕ and ψ are the same for the target functions of both problems. For the record, the dialogue with the medical expert took no more than one hour.
At this point it is interesting to observe that if one wishes to restore the non-monotone function which corresponds to the concept "biopsy and not cancer", then this can be achieved easily as follows. First note that the above concept represents cases in which surgery can potentially be avoided. This concept can be represented by the composite formula f1 & ¬f2, and thus it can be computed by using expressions (5-5) and (5-6). Therefore, a total of 43 calls is sufficient to restore this function (which is both non-monotone and mathematically quite complex).

5.3. Independent Search of Functions
Table 1 represents the dialogue which was executed based on the algorithm for problem 1 (section 3). The information in table 1 resulted in the formation of formulas (5-2) to (5-6). The first column of table 1 identifies the binary vectors (examples): the first number indicates a Hansel chain and the second number indicates the location of the vector within that chain. The second column presents the binary vectors themselves. The third, fourth and fifth columns present the values of the functions f1, f2 and ψ, respectively. Asterisks "*" mark values obtained directly from the expert. Values without asterisks were inferred by using the previous values and the property of monotonicity.

The sixth column shows the indexes of the vectors to which monotonicity extends a true (1) value of the given function. For instance, for vector #7.1 we have f1(00100) = 1 (as given by the expert); hence we can also set (by utilizing the monotonicity property) f1(00101) = 1, because vector (00101) is vector #7.2. We can do the same with vector #10.4. Similarly, the seventh column shows the indexes of the vectors to which monotonicity extends a false (0) value. For instance, for vector #6.1 we have f1(00010) = 0 (as given by the expert), and thus we can infer (by utilizing monotonicity) that f1(00000) = 0, where (00000) is vector #10.1.

The values in column 3 are derived by "up-down sliding" in table 1 according to the following steps:
Step 1. Begin from the first vector #1.1 (i.e., (01100)). Ask the expert about the value of f1(01100).
Action: In this case the expert reported that f1(01100) = 1.
Step 2. Write "1*" next to vector #1.1 (under column 3). Recall that an asterisk denotes an answer directly provided by the expert.
Apparently, if the reply had been false (0), we would have written "0*" in column 3. The case of a false value corresponds to column 7 of table 1.
Step 3. Go to column 6 to find the vectors to which the true (1) value extends.
Action: In this case vectors #1.2, #6.3, and #7.3 must also take (because of monotonicity) the true value (1). Therefore, in column 3 and next to examples #1.2, #6.3, and #7.3 the value of the function must be 1 (note that now no asterisk is used).
Step 4. Go to vector #1.2 (the next vector while sliding down). Check whether the value of f1(11100) has already been fixed or not. If the value of f1(11100) is not fixed (i.e., it is empty), then repeat steps 1-3 above for this new vector (i.e., vector #1.2). If f1(11100) is not empty (i.e., it has already been fixed), then go to the next vector while sliding down (i.e., move to vector #2.1). (If a value has not been fixed yet, we denote this by fi(x) = e, for empty.)
Action: f1(11100) ≠ e. Therefore, extend the values of the function to vectors #6.4 and #7.4 and go to the next vector in the table (i.e., move to vector #2.1).
Step 5. The next vector is #2.1. Continue as in step 4 above, but now for vector #2.1 (instead of vector #1.2).
The above procedure is repeated until all vectors and functions have been covered. The interpretation of columns 8 to 11 is provided in the next section.

In order to construct formula (5-3) (which was shown in section 5.2) one needs to concentrate on the information depicted in columns 2 and 3 of table 1. One takes the first vector marked with "1*" in each one of the chains and constructs for each such vector the conjunction of its non-zero components. For instance, for the vector (01010) in chain 2 the corresponding conjunction is x2x4. Similarly, from chain 6 we take the "1" components of the vector (00110) and form the conjunction x3x4. Formulas (5-1), (5-2) and (5-4) were obtained in a similar manner. A small sketch of this term-construction rule is given below.
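The rule just described is mechanical, so we sketch it in Python (our illustration; the vectors listed below are the first "1*" entries of each chain in table 1, column 3):

    def term(vector: str, prefix: str = 'x') -> str:
        """Conjunction of the non-zero components of a binary vector,
        e.g. '01010' -> 'x2x4'."""
        return ''.join(f'{prefix}{i + 1}'
                       for i, bit in enumerate(vector) if bit == '1')

    # First vectors marked '1*' in each chain of table 1 (column 3):
    starred = ['01100', '01010', '11000', '10010', '10100',
               '00110', '00100', '01001', '10001', '00001']
    print(' v '.join(term(v) for v in starred))
    # x2x3 v x2x4 v x1x2 v x1x4 v x1x3 v x3x4 v x3 v x2x5 v x1x5 v x5
    # i.e., the unsimplified right-hand side of (5-3)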
5.4. Sequential Search for Nested Functions
Next we show how it is possible to further decrease the number of calls by using a modification of the algorithm described for solving problem 2. Recall that the functions f1 and f2 are nested because E2+ ⊆ E1+. That is, for any input example x the following is true: f2(x) ≤ f1(x). This statement means that if we recognize cancer, then we recognize the necessity of a biopsy too. The property f2(x) ≤ f1(x) means that

    if f2(x) = 1, then f1(x) = 1,                                                  (5-7)
and
    if f1(x) = 0, then f2(x) = 0.                                                  (5-8)

These properties make it possible to avoid calls to the expert for determining the right hand sides of (5-7) and (5-8) whenever one knows the values on the left hand sides. This idea can be used to make the dialogue with the expert more efficient. This was done in this experiment, and it is described in table 1. Column 8 of table 1 shows the results of using the values of f2 (column 4) and property (5-7) to restore function f1. For instance, for vector #6.2 the expert gave f2(00110) = 1; hence, by using (5-7), we also have f1(00110) = 1 without an additional call to the expert. Column 9 represents the same situation for f2, when one knows the expert's answers for f1. In this case we use the pair <relation (5-8), column 3> instead of the pair <relation (5-7), column 4>, as was the case before. Note that the asterisks "*" in columns 8 and 9 mark the calls which are actually necessary.

In order to restore function f1 by using information about the values of f2, we asked the expert 7 times instead of the 13 calls needed for the independent restoration of f1. That is, we were able to use about 2 times fewer calls. In particular, the value f1(10001) = 1 for vector #9.2 was inferred from the fact that f2(10001) = 1. However, to restore function f2 by using f1 we asked the expert 13 times; that is, as many times as during the independent restoration of f2 (the nested approach was not beneficial in this case). This should not come as a surprise, because the limits described in the previous sections are upper limits. On average, the sequential search is expected to outperform a non-sequential approach, but cases like the last one can still occur.

The previous analysis shows that, in this particular application, the most effective way was first to restore f2 with 13 calls and then to use these function values to restore f1, which required only 7 additional calls. Restoration of f2 by using information about the values of f1 required 13 calls, so restoring both functions in E5 in that order would have required 13 + 13 = 26 calls. In the restoration sequence <f2, f1> the total number of calls needed to restore both functions is 13 + 7 = 20, in comparison with the 13 + 13 = 26 calls of the independent restoration. To both cases we should also add the 13 + 4 calls which were needed to restore the functions ψ and ϕ. Therefore, in total we needed 20 + 17 = 37 and 26 + 17 = 43 calls, respectively.
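The bookkeeping of (5-7) and (5-8) is easy to automate. A minimal sketch (ours, assuming the value tables are kept as dictionaries, as in the earlier sketches):

    def propagate_nested(f1: dict[str, int], f2: dict[str, int]) -> None:
        """Nesting f2 <= f1: copy answers across the two tables
        using (5-7) and (5-8), saving oracle calls."""
        for x, v in f2.items():
            if v == 1:
                f1.setdefault(x, 1)   # (5-7): f2(x)=1 implies f1(x)=1
        for x, v in f1.items():
            if v == 0:
                f2.setdefault(x, 0)   # (5-8): f1(x)=0 implies f2(x)=0

    # Example from table 1, vector #9.2: the expert said f2(10001)=1,
    # so f1(10001)=1 follows with no extra call.
    f1_table, f2_table = {}, {'10001': 1}
    propagate_nested(f1_table, f2_table)
    print(f1_table)   # {'10001': 1}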
Recall that we began from 2 × 2,048 and 2 × 924 calls for both functions. Our totals of 37 and 43 calls are about 100 times less than the number of non-optimized calls (i.e., 2 × 2,048) and about 50 times less than the upper limit guaranteed by Hansel's lemma (i.e., 2 × 924 calls).

5.5. A Joint Search Approach for Nested Functions
Next we study the possibility of decreasing the number of calls once more with a joint search for f1 and f2, by using the following switching strategy:
Step 1. Ask the expert for the value of f2(x1.1) for vector #1.1.
Step 2. If f2(x1.1) = 1, then ask for the first vector of the next chain. That is, ask for the value of f2(x2.1) for vector #2.1. Otherwise, ask for the value of f1(x1.1).
Step 3. If f1(x1.1) = 0, then ask for the value of f1(x1.2) for vector #1.2. Otherwise, switch to asking for the value of f2(x1.2).

The generalization of the previous steps for an arbitrary xi.k is done with steps A and B. Step A is for f1 and step B is for f2. These steps are described as follows:
Step A. If f1(xi.k) = 0, vector #(i.k) is not the last one in the current chain, and f1(xi.k+1) = e (i.e., empty), then ask the expert for the value of f1(xi.k+1). Otherwise, ask for the value of the first vector xi+1.j from the next chain such that f1(xi+1.j) = e. If f1(xi.k) = 1 and f2(xi.k) = e, then ask for the value of f2(xi.k). If f1(xi.k) = 1 and f2(xi.k) = 0, then ask for the value of f2(y), where y is the first vector from the same or the next chain such that f2(y) = e.
Step B. If f2(xi.k) = 1, then ask for the first vector y of the next chain such that f2(y) = e. If f2(xi.k) = 0 and f1(xi.k) = e, then ask the expert for the first vector y such that f1(y) = e.

The results of applying this strategy for restoring the two functions f1 and f2 are presented in table 1, columns 10 and 11, respectively. The number of calls needed to restore both f1 and f2 is equal to 22 (see table 1, columns 10 and 11): f1 was asked about 8 times and f2 was asked about 14 times (see the values marked with "*" in columns 10 and 11). Recall that the sequential approach <f2, f1> required 20 calls.

Next, in tables 2 and 3 we summarize the various results of interviewing the expert under the different strategies. Table 2 presents the numbers of calls for the different kinds of searches for the functions f1, f2, ψ in E5 and for ϕ in E3. Table 3 presents the results for E11. These results summarize the numbers of calls needed for the full restoration of f1 and f2 as functions of 11 binary variables at level 2. In particular, note that the independent search for f1 required 13 calls in E5, 13 calls for ψ and 4 calls for ϕ, i.e., the 30 calls shown in table 3. The sequential search <f2, f1> required the same number of calls for f2, but only 7 calls for f1, because we reused the same ϕ and ψ functions found during the restoration of f2 (see also tables 2 and 3).
The totals shown in the column labeled "f1, f2" (i.e., column 5) represent the numbers of calls needed to restore both functions. For the last four hierarchical searches in this column we excluded the unnecessary calls for a second restoration of ψ and ϕ, i.e., 17 calls. For instance, the independent search required 30 calls to restore both f1 and f2 in E5, i.e., a total of 43 calls in E11, as shown in this column.

Next, let us describe the meaning of the indexes 1 and 2 in tables 2 and 3. We denote by N1 the upper limit of calls for the non-optimized search, and by Ni (i = 2,3,4,5,6) the analogous totals for the other kinds of search for both functions (column 5). In these terms, index1 = N1/Ni (i = 2,3,4,5,6) and index2 = N2/Ni (i = 3,4,5,6). In particular, index1 shows that we used 3.2 times fewer calls than N1 and 2.0 times fewer calls than N2 in E5, and also 110.7 times fewer calls than N1 and 49.9 times fewer calls than N2 in E11. These interactive experiments demonstrate the potential for achieving significantly high efficiency with the proposed approach for the interactive restoration of monotone Boolean functions.

Table 2. Comparison of results in E5 and E3.

    #   Ways of search                      f1    f2    f1,f2   index1   index2   ϕ    ψ
    1   Non-optimized search (upper limit)  32    32    64      1        -        8    32
    2   Optimal search (upper limit)        20    20    40      1.6      1        6    20
    3   Independent search                  13    13    26      2.5      1.5      4    13
    4   Sequential search <f1, f2>          13    13    26      2.5      1.5
    5   Sequential search <f2, f1>          7     13    20      3.2      2.0
    6   Joint search                        -     -     22      2.9      1.8
Table 3. Comparison of results in E11.

    #   Ways of search                      f1      f2      f1,f2   index1   index2
    1   Non-optimized search (upper limit)  2,048   2,048   4,096   1        -
    2   Optimal search (upper limit)        924     924     1,848   2.22     1
    3   Independent search                  30      30      43      95.2     42.9
    4   Sequential search <f1, f2>          30      13      43      95.2     42.9
    5   Sequential search <f2, f1>          7       30      37      110.7    49.9
    6   Joint search                        -       -       39      105.0    47.4

6. COMPUTATIONAL INTERACTIVE HIERARCHICAL EXPERIMENT II

We suppose the same hierarchy of binary features and the same functions ϕ and ψ as in the previous experiment. However, now we consider the same problem with different f1 and f2 functions. We restore each fi as a superposition of the fi, ϕ and ψ functions: fi(x1,x2,x3,x4,x5) = fi(ϕ(w1,w2,w3), ψ(y1,y2,y3,y4,y5), x3, x4, x5), where the hidden functions are:

    f1(x) = x1x2 ∨ x1x3 ∨ x4 ∨ x5,                                                 (6-1)
and
    f2(x) = x1x2 ∨ x1x3 ∨ x1x4 ∨ x2x4 ∨ x3x4 ∨ x5.                                 (6-2)

The proposed optimized dialogue for restoring f1(x1,x2,x3,x4,x5) actually required 12 calls (see table 4, column 3). Thus we used 1.66 times fewer calls than Hansel's bound (i.e., the 20 calls) and 2.66 times fewer than the non-optimized dialogue (i.e., the 32 calls). The other hidden functions are x1 = ϕ(w1,w2,w3) = w2 ∨ w1w3 and x2 = ψ(y1,y2,y3,y4,y5) = y1 ∨ y2 ∨ y3 ∨ y4. The dialogue for f2(x1,x2,x3,x4,x5) required 14 calls (see also table 4, column 4). This is 1.4 times fewer than Hansel's limit (of 20 calls) and 2.2 times fewer than the non-optimized dialogue (of 32 calls). Next, we obtained f1 and f2 from table 4 as the following DNFs:

    f1(x) = x2x4 ∨ x1x2 ∨ x1x4 ∨ x1x3 ∨ x4 ∨ x3x5 ∨ x2x5 ∨ x1x5 ∨ x5,              (6-3)
and
    f2(x) = x2x4 ∨ x1x2 ∨ x1x4 ∨ x1x3 ∨ x3x4 ∨ x2x3x5 ∨ x2x5 ∨ x1x5 ∨ x5.          (6-4)

Simplification of these DNF expressions allowed us to exclude some conjunctions which are not minimal lower units, and thus to obtain (6-1) and (6-2), respectively. For instance, in (6-3) x1x5 is not a minimal lower unit, since x5 covers it.
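As a quick sanity check of the kind these experiments rely on, the hidden functions (6-1) and (6-2) can be verified by brute force to be monotone and nested (f2 ≤ f1). A small Python sketch (our illustration):

    from itertools import product

    f1 = lambda x1, x2, x3, x4, x5: (x1 and x2) or (x1 and x3) or x4 or x5
    f2 = lambda x1, x2, x3, x4, x5: ((x1 and x2) or (x1 and x3) or (x1 and x4)
                                     or (x2 and x4) or (x3 and x4) or x5)

    cube = list(product((0, 1), repeat=5))
    # Nestedness: f2(x) <= f1(x) for every x in E5.
    assert all(f2(*x) <= f1(*x) for x in cube)
    # Monotonicity: flipping any 0 to 1 never decreases the value.
    for f in (f1, f2):
        for x in cube:
            for i in range(5):
                if x[i] == 0:
                    y = x[:i] + (1,) + x[i + 1:]
                    assert f(*x) <= f(*y)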
Table 4. Dialogue Sequences for Experiment II.
(Notation as in table 1.)

    #      Vector    f1   f2   ψ    1⇒1           0⇒0        f1~  f2~  f1^  f2^
    (1)    (2)       (3)  (4)  (5)  (6)           (7)        (8)  (9)  (10) (11)
    Chain 1.
    1.1    (01100)   0*   0*   1*   1.2;6.3;7.3   6.4;7.4    0*   0    0*   0*
    1.2    (11100)   0*   0*   1    6.4;7.4       5.1;3.1    0*   0    0*   0
    Chain 2.
    2.1    (01010)   1*   1*   1*   2.2;6.3;8.3   6.1;8.1    1    1*   1*   1*
    2.2    (11010)   1    1    1    6.4;8.4       3.1        1    1    1    1
    Chain 3.
    3.1    (11000)   1*   1*   1*   3.2           8.1;9.1    1    1*   1    1*
    3.2    (11001)   1    1    1    7.4;8.4       8.2;9.2    1    1    1    1
    Chain 4.
    4.1    (10010)   1*   1*   1*   4.2;9.3       6.1;9.1    1    1*   1    1*
    4.2    (10110)   1    1    1    6.4;9.4       6.2;5.1    1    1    1    1
    Chain 5.
    5.1    (10100)   1*   1*   1*   5.2           7.1;9.1    1    1*   1    1*
    5.2    (10101)   1    1    1    7.4;9.4       7.2;9.2    1    1    1    1
    Chain 6.
    6.1    (00010)   1*   0*   1*   6.2;10.3      10.1       1*   0*   1*   0*
    6.2    (00110)   1    1*   1    6.3;10.4      7.1        1    1    1    1*
    6.3    (01110)   1    1    1    6.4;10.5                 1    1    1    1
    6.4    (11110)   1    1    1    10.6                     1    1    1    1
    Chain 7.
    7.1    (00100)   0    0    1*   7.2;10.4      10.1       0    0    0    0
    7.2    (00101)   1*   0*   1    7.3;10.4      10.2       1*   0*   1*   0*
    7.3    (01101)   1    1*   1    7.4;10.5      8.2;10.2   1    1*   1    1*
    7.4    (11101)   1    1    1    10.6                     1    1    1    1
    Chain 8.
    8.1    (01000)   0    0    1*   8.2           10.1       0    0    0    0
    8.2    (01001)   1*   1*   1    8.3           10.2       1    1*   1    1*
    8.3    (01011)   1    1    1    8.4           10.5;10.3  1    1    1    1
    8.4    (11011)   1    1    1    10.6          9.3        1    1    1    1
    Chain 9.
    9.1    (10000)   0*   0*   1*   9.2           10.1       0*   0    0*   0*
    9.2    (10001)   1*   1*   1    9.3           10.2       1    1*   1    1*
    9.3    (10011)   1    1    1    9.4           10.3       1    1    1    1
    9.4    (10111)   1    1    1    10.6          10.4       1    1    1    1
    Chain 10.
    10.1   (00000)   0    0    0*   10.2                     0    0    0    0
    10.2   (00001)   1*   1*   0*   10.3                     1*   0    1*   0
    10.3   (00011)   1    1    1*   10.4                     1    1*   1    1*
    10.4   (00111)   1    1    1    10.5                     1    1    1    1
    10.5   (01111)   1    1    1    10.6                     1    1    1    1
    10.6   (11111)   1    1    1                             1    1    1    1
    Total calls      12   14   12                            6    10   20 (joint)
Below we present the superposition of formulas (6-1) to (6-4) for f1 and f2 as the final formulas:

    f1(x) = x1x2 ∨ x1x3 ∨ x4 ∨ x5
          = (w2 ∨ w1w3)(y1 ∨ y2 ∨ y3 ∨ y4) ∨ (w2 ∨ w1w3)x3 ∨ x4 ∨ x5,
and
    f2(x) = (w2 ∨ w1w3)(y1 ∨ y2 ∨ y3 ∨ y4) ∨ (w2 ∨ w1w3)x3 ∨ (w2 ∨ w1w3 ∨ y1 ∨ y2 ∨ y3 ∨ y4 ∨ x3)x4 ∨ x5.

In total, we needed 12 + 12 + 4 = 28 calls to restore f1 and 14 + 12 + 4 = 30 calls to restore f2 independently. We actually used 12 + 14 + 12 + 4 = 42 calls instead of the 58 calls needed to restore both functions separately. We used independent, sequential, and joint searches to restore these functions. The final results are presented in tables 4 to 6. Table 5 presents the numbers of calls for the different kinds of searches in E5 for the functions f1, f2, ψ and in E3 for ϕ. Table 6 presents the results for E11. These tables summarize the numbers of calls needed for the full restoration of f1 and f2 as functions of 11 binary variables at level 2. In particular, the independent search required 12 calls for f1 in E5, 12 calls for ψ and 4 calls for ϕ; that is, the 28 calls shown in table 6. The sequential search <f1, f2> required the same number of calls for f1, but only 10 calls for f2, because we reused the same ϕ and ψ component functions which were found for f1.

The totals shown in the column labeled "f1, f2" in tables 5 and 6 represent the numbers of calls required to restore both functions. For the last four hierarchical searches in this column we excluded the unnecessary second restoration of ψ and ϕ, i.e., 16 calls. For instance, the independent search required 26 calls to restore both f1 and f2 in E5, i.e., the total of 42 calls in E11 shown in this column. Similarly to the previous section, index1 shows that we used 3.2 times fewer calls than N1 and 2.0 times fewer calls than N2 in E5, and also 113.8 times fewer calls than N1 and 51.3 times fewer calls than N2 in E11.

Table 5. Comparison of results in E5 and E3.

    #   Ways of search                      f1    f2    f1,f2   index1   index2   ϕ    ψ
    1   Non-optimized search (upper limit)  32    32    64      1        -        8    32
    2   Optimal search (upper limit)        20    20    40      1.6      1        6    20
    3   Independent search of f1 and f2     12    14    26      2.5      1.5      4    12
    4   Sequential search <f1, f2>          12    10    22      2.7      1.8
    5   Sequential search <f2, f1>          6     14    20      3.2      2.0
    6   Joint search                        -     -     19      3.2      2.0
Table 6. Comparison of results in E11.

    #   Ways of search                      f1      f2      f1,f2   index1   index2
    1   Non-optimized search (upper limit)  2,048   2,048   4,096   1        -
    2   Optimal search (upper limit)        924     924     1,848   2.22     1
    3   Independent search of f1 and f2     28      30      42      97.5     44.0
    4   Sequential search <f1, f2>          28      10      38      107.8    48.6
    5   Sequential search <f2, f1>          6       30      36      113.8    51.3
    6   Joint search                        -       -       36      113.8    51.3

7. CONCLUDING REMARKS

Some computational experiments (see, for instance, (Gorbunov and Kovalerchuk, 1982) and (Triantaphyllou and Soyster, 1995)) have shown that for many functions it is possible to significantly decrease the number of questions to an oracle in comparison with the full number of questions (which is equal to 2^n) and also in comparison with the guaranteed pessimistic estimation (formula (2-2) in section 2). Some closed-form results were also obtained for the connected problem of retrieving a maximal upper zero (Kovalerchuk and Lavkov, 1984). The results in this paper demonstrate that an interactive approach based on monotone Boolean functions has the potential to be very beneficial to interactive machine learning.

Acknowledgements
The authors are very grateful to Dr. James F. Ruiz, from the Woman's Hospital of Baton Rouge, LA, for his expert assistance in formulating the medical example described in this paper.

REFERENCES

1. Alekseev, V.B. (1988), "Monotone Boolean Functions," Encyclopedia of Mathematics, Vol. 6, Kluwer Academic Publishers, 306-307.
2. Blumer, A., A. Ehrenfeucht, D. Haussler, and M.K. Warmuth (1989), "Learnability and the Vapnik-Chervonenkis Dimension," Journal of the Association for Computing Machinery, Vol. 36, No. 4, 929-965.
3. Bongard, M. (1967), Pattern Recognition, "Nauka" Publ., Moscow (in Russian; English translation, 1970, by Spartakos Press, NY).
4. Boros, E., P.L. Hammer, and T. Ibaraki (1994), "Predicting Cause-Effect Relationships from Incomplete Discrete Observations," SIAM Journal on Discrete Mathematics, Vol. 7, No. 4, 531-543.
5. Cohn, D., L. Atlas, and R. Ladner (1994), "Improving Generalization with Active Learning," Machine Learning, Vol. 15, 201-221.
6. Dedekind, R. (1897), "Ueber Zerlegungen von Zahlen durch ihre grössten gemeinsamen Teiler," Festschrift Hoch. Braunschweig u. ges. Werke, II, 103-148.
7. Dietterich, T.G., and R.S. Michalski (1983), "A Comparative Review of Selected Methods for Learning from Examples," in: R.S. Michalski, J.G. Carbonell, and T.M. Mitchell (eds.), Machine Learning: An Artificial Intelligence Approach, Tioga Publishing Company, Palo Alto, CA, 41-81.
8. Fu, L.M. (1993), "Knowledge-Based Connectionism for Revising Domain Theories," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 23, No. 1, 173-182.
9. Goldman, S., and R.H. Sloan (1992), "The Power of Self-Directed Learning," Machine Learning, Vol. 14, 271-294.
10. Gorbunov, Yu., and B. Kovalerchuk (1982), "An Interactive Method of Monotone Boolean Function Restoration," Journal of the Academy of Sciences of the UzSSR, Engineering, Vol. 2, 3-6 (in Russian).
11. Hammer, P.L., and E. Boros (1994), "Logical Analysis: An Overview," RUTCOR Research Report, Rutgers University, NJ.
12. Hansel, G. (1966), "Sur le nombre des fonctions booléennes monotones de n variables," C.R. Acad. Sci. Paris, Vol. 262, No. 20, 1088-1090.
13. Haussler, D. (1988), "Quantifying Inductive Bias: AI Learning Algorithms and Valiant's Learning Framework," Artificial Intelligence, Vol. 36, 177-221.
14. Haussler, D., and M. Warmuth (1993), "The Probably Approximately Correct (PAC) and Other Learning Models," in: A.L. Meyrowitz and S. Chipman (eds.), Foundations of Knowledge Acquisition: Machine Learning, Kluwer Academic Publishers, Norwell, MA, 291-312.
15. Hattoru, K., and Y. Torri (1993), "Effective Algorithms for the Nearest Neighbor Method in the Clustering Problem," Pattern Recognition, Vol. 26, No. 5, 741-746.
16. Kamath, A.P., N.K. Karmakar, K.G. Ramakrishnan, and M.G.C. Resende (1992), "A Continuous Approach to Inductive Inference," Mathematical Programming, Vol. 57, 215-238.
17. Kamath, A.P., N.K. Karmakar, K.G. Ramakrishnan, and M.G.C. Resende (1994), "An Interior Point Approach to Boolean Vector Synthesis," Proceedings of the 36th MSCAS, 1-5.
18. Kamgar-Parsi, B., and L.N. Kanal (1985), "An Improved Branch-and-Bound Algorithm for Computing k-Nearest Neighbors," Pattern Recognition Letters, Vol. 3, 7-12.
19. Kurita, T. (1991), "An Efficient Agglomerative Clustering Algorithm Using a Heap," Pattern Recognition, Vol. 24, No. 3, 205-209.
20. Kleitman, D. (1969), "On Dedekind's Problem: The Number of Monotone Boolean Functions," Proceedings of the American Mathematical Society, Vol. 21, 677-682.
21. Korobkov, V.K. (1965), "On Monotone Boolean Functions of Algebra of Logic," in: Problemy Kibernetiki, Vol. 13, "Nauka" Publ., Moscow, 5-28 (in Russian).
22. Kovalerchuk, B., and V. Lavkov (1984), "Retrieval of the Maximum Upper Zero for Minimizing the Number of Attributes in Regression Analysis," USSR Computational Mathematics and Mathematical Physics, Vol. 24, No. 4, 170-175.
23. Kovalerchuk, B., E. Triantaphyllou, and E. Vityaev (1995), "Monotone Boolean Functions Learning Techniques Integrated with User Interaction," in: Proceedings of the Workshop "Learning from Examples vs. Programming by Demonstration," 12th International Conference on Machine Learning, Tahoe City, CA, U.S.A., 41-48.
24. Kovalerchuk, B., E. Triantaphyllou, and J.F. Ruiz (1995), "Monotonicity and Logical Analysis of Data: A Mechanism of Evaluation of Mammographic and Clinical Data," Technical Report, Louisiana State University, Dept. of Industrial Engineering, Baton Rouge, LA 70803-6409.
25. Mangasarian, O.L., W.N. Street, and W.H. Wolberg (1995), "Breast Cancer Diagnosis and Prognosis Via Linear Programming," Operations Research, Vol. 43, No. 4, 570-577.
26. Mitchell, T. (1980), "The Need for Biases in Learning Generalizations," Technical Report CBM-TR-117, Rutgers University, New Brunswick, NJ.
27. Murphy, P.M., and D.W. Aha (1994), UCI Repository of Machine Learning Databases, machine-readable data repository, University of California, Department of Information and Computer Science, Irvine, CA.
28. Natarajan, B.K. (1989), "On Learning Sets and Functions," Machine Learning, Vol. 4, No. 1, 123-133.
29. Rudeanu, S. (1974), Boolean Functions and Equations, North-Holland, NY.
30. Shavlik, J.W. (1994), "Combining Symbolic and Neural Learning," Machine Learning, Vol. 14, 321-331.
31. Triantaphyllou, E., A.L. Soyster, and S.R.T. Kumara (1994), "Generating Logical Expressions from Positive and Negative Examples via a Branch-and-Bound Approach," Computers and Operations Research, Vol. 21, No. 2, 185-197.
32. Triantaphyllou, E. (1994), "Inference of a Minimum Size Boolean Function from Examples by Using a New Efficient Branch-and-Bound Approach," Journal of Global Optimization, Vol. 5, No. 1, 69-94.
33. Triantaphyllou, E., and A. Soyster (1995), "An Approach to Guided Learning of Boolean Functions," to appear in: Mathematical and Computer Modelling.
34. Vapnik, V.N. (1982), Estimation of Dependences Based on Empirical Data, Springer-Verlag, New York, NY.
35. Vityaev, E., and A. Moskvitin (1993), "Introduction to Discovery Theory. Program System: DISCOVERY," Logical Methods in Informatics, Computational Systems (Vychislitel'nye sistemy), Institute of Mathematics, Russian Academy of Science, Novosibirsk, No. 148, 117-163 (in Russian).
36. Wolberg, W.H., and O.L. Mangasarian (1990), "Multisurface Method of Pattern Separation for Medical Diagnosis Applied to Breast Cytology," Proceedings of the National Academy of Sciences of the USA, Vol. 87, No. 23, 9193-9196.
37. Yablonskii, S. (1986), Introduction to Discrete Mathematics, "Nauka" Publ., Moscow (in Russian).
38. Zagoruiko, N. (1979), Empirical Forecast, "Nauka" Publ., Novosibirsk (in Russian).