An Inductive Inference Machine

Solomonoff's theory of inductive inference is Ray Solomonoff's mathematical formalization of Occam's razor. It explains observations of the world by the smallest computer program that outputs those observations. Solomonoff proved that this explanation is the most likely one, under the assumption that the world is generated by an unknown computer program: the probability distribution over all computer programs that output the observations favors the shortest one.

Prediction is done using a completely Bayesian framework. The universal prior is calculated over all computable sequences; this is the universal a priori probability distribution, and no computable hypothesis is assigned zero probability. This means that Bayes' rule can be used to predict the continuation of any particular computable sequence.
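
The summary above stays informal. As a rough sketch in the standard modern notation for algorithmic probability (a universal prefix machine U and programs p of length l(p); this notation is not in the slides themselves), the universal prior over finite strings x and the Bayesian prediction rule it induces are:

```latex
% Universal a priori probability of a finite string x: sum over all programs p
% on which the universal machine U produces an output that begins with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}

% Bayes' rule then predicts the continuation of any computable sequence:
P(x_{n+1} \mid x_1 \dots x_n) \;=\; \frac{M(x_1 \dots x_n \, x_{n+1})}{M(x_1 \dots x_n)}
```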


1. An Inductive Inference Machine (1957), R. J. Solomonoff (1926–2009). Aly O. Abdelkareem, University of Calgary
2. What is Inductive Inference? • Inductive inference is the process of reaching a general conclusion from specific examples. • The general conclusion should apply to unseen examples.
3. Problem • We will feed the machine these examples • Here, we want the machine to decide upon the most probable digit to fit into the empty square • Then, we present the machine with this problem
4. Solutions • N-grams and prediction n-grams • N-tuples and structures • N-gram sets, prediction n-gram sets, N-tuple sets
5. N-Grams • Shannon has shown how we may predict words or letters in English by the use of "n-grams". • If we want to predict the next letter in "Today we ar?", use a bi-gram: pick the continuation with the highest frequency among "ra, rb, rc, re, rd … etc."
6. 1. N-grams and prediction n-grams • N-grams: extend the idea to 2-grams • Possibilities: the prediction should be consistent with the other examples • Consistency: a prediction n-gram is called consistent if all of its predictions, as applied to the examples given to the machine, have been correct (a small sketch follows this slide)
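
A minimal sketch of the n-gram machinery on slides 5–6; the corpus, the query strings, and the helper names are placeholders invented for illustration, not taken from the presentation. Bigram counts predict the next letter, and a prediction rule counts as "consistent" when it is correct on every worked example:

```python
# Minimal sketch of Shannon-style bigram prediction and the consistency test.
# Corpus, queries and function names are illustrative placeholders.
from collections import Counter, defaultdict

def build_bigrams(text):
    """Count how often each character follows each other character."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, context):
    """Return the most frequent continuation of the last character, if any."""
    following = counts[context[-1]]
    return following.most_common(1)[0][0] if following else None

def is_consistent(predict, examples):
    """A prediction rule is 'consistent' (slide 6) if it is correct on every example."""
    return all(predict(context) == answer for context, answer in examples)

corpus = "today we are here and today we are ready and today we arrive"
bigrams = build_bigrams(corpus)
print(predict_next(bigrams, "today we ar"))                        # 'e'
print(is_consistent(lambda c: predict_next(bigrams, c),
                    [("today we ar", "e"), ("we are her", "e")]))  # True
```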
  7. 7. “=” and “0” are so close What if we want the machine to find the solution related to the “=“ ? Is a spacer and it is not a proper part of the prediction 3-gram
8. 2. N-tuples and Structures • N-tuple: an ordered set of n objects. • Structure: a set of instructions for taking the members of an N-tuple and moving them in a certain way.
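
A small sketch of how an N-tuple plus a structure might be represented; the index-list encoding of a "structure" is an assumption made for illustration, not the paper's representation:

```python
# Minimal sketch: an N-tuple is an ordered collection, and a "structure" is
# modelled here as a list of index instructions that rearranges its members.
# This encoding is an illustrative assumption, not the paper's representation.

def apply_structure(structure, ntuple):
    """Move the members of an n-tuple according to the index instructions."""
    return tuple(ntuple[i] for i in structure)

triple = ("1", "+", "2")
print(apply_structure((0, 1, 2), triple))  # ('1', '+', '2') - identity ordering
print(apply_structure((2, 1, 0), triple))  # ('2', '+', '1') - members swapped
```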
9. Applying N-tuple and Structure • The structure (shown as a figure on the slide)
10. What if we have new examples (unseen data)? The machine is unable to solve this problem, since it cannot yet have learned the prediction 3-gram.
11. 3. Sets: N-gram sets, prediction n-gram sets, N-tuple sets • N-gram set: an unordered set of n-grams • Prediction n-gram set: an unordered set of prediction n-grams • N-tuple set: an unordered set of N-tuples
12. Using sets, we can then apply the Cartesian product, the Boolean product, the Boolean sum, and the occurrence operation
13. Occurrence Operation (illustrated on the slide; a small sketch of these set operations follows)
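
A brief sketch of the set operations listed on slide 12. The Cartesian product, Boolean product, and Boolean sum carry their usual set-theoretic meanings; the "occurrence operation" below is an assumed reading (counting how many worked examples contain a member), since the slides only name it:

```python
# Minimal sketch of operations on n-gram / n-tuple sets (slides 11-13).
# The example sets and the semantics of 'occurrences' are illustrative assumptions.
from itertools import product

ngram_set_a = {("1", "+"), ("2", "+")}
ngram_set_b = {("+", "2"), ("+", "3")}

cartesian = set(product(ngram_set_a, ngram_set_b))   # all ordered pairs
boolean_product = ngram_set_a & ngram_set_b          # members in both sets
boolean_sum = ngram_set_a | ngram_set_b              # members in either set

def occurrences(member, examples):
    """Count how many worked examples contain the given n-gram (assumed semantics)."""
    n = len(member)
    return sum(
        any(tuple(ex[i:i + n]) == member for i in range(len(ex) - n + 1))
        for ex in examples
    )

examples = [("1", "+", "2", "=", "3"), ("2", "+", "2", "=", "4")]
print(occurrences(("+", "2"), examples))  # 2
```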
14. Applying to our problem: the N-gram set, the 3-tuple set, the structure used, and the resulting prediction n-gram set (all shown as figures on the slide)
15. Limitations • High-order sets: inability to deal with members that are themselves sets; resolved by defining an N-tuple recursively. • Non-deterministic: cannot work on problems with several possible answers (language translation, weather prediction, or information retrieval). • Generalization: small errors in the input disturb the machine; cf. a Chomsky machine to find grammatically correct sentences.
16. The Concept of Utility • Utility values are assigned to each abstraction used • For example: 1. Prediction n-grams: a consistent one gets high utility; 2. Structures or N-tuples: utility values proportional to the frequency with which they create consistent predictions • Utilities are used to obtain the a priori probability of consistency of a newly created prediction
17. Mode of Operation of the Machine • The computer starts out with a small set of abstractions, along with a priori utilities assigned to all of them • Using a set of transformation rules, the machine creates a new set of abstractions from the old set • It selects combinations with high a priori utilities and applies empirical evaluation • It keeps the good new abstractions and then repeats (a loop sketch follows this slide) • Goal: find one abstraction that fits the problem; if more than one fits, find the one with the highest utility; if the utilities conflict, the answer will not be reliable (not many examples)
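
A compact sketch of that generate-evaluate-keep loop; the function names, the scoring scheme, and the toy transformation rule are illustrative assumptions, not the program described in the paper:

```python
# Minimal sketch of slide 17: start from a small pool of abstractions with
# a priori utilities, transform the best ones into new candidates, evaluate
# them empirically, keep the good ones, and repeat.

def run_machine(initial_abstractions, transform, evaluate, rounds=10, keep=20):
    pool = dict(initial_abstractions)          # abstraction -> utility
    for _ in range(rounds):
        best = sorted(pool, key=pool.get, reverse=True)[:keep]
        for candidate in transform(best):      # create new abstractions from old ones
            utility = evaluate(candidate)      # empirical evaluation on worked examples
            if utility > 0:
                pool[candidate] = utility      # keep only the good new abstractions
    return pool

# Toy usage with placeholder transformation and evaluation rules:
pool = run_machine(
    {("1", "+", "1"): 1.0},
    transform=lambda best: [abstr + ("=",) for abstr in best],
    evaluate=lambda cand: 1.0 if cand[-1] == "=" else 0.0,
)
print(sorted(pool, key=pool.get, reverse=True)[0])
```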
18. Important problems that must yet be solved • Searching for consistent predictions that fit a particular question • The set of rules for the manipulation of the sets needs to be investigated • Assigning utility must be worked out in greater detail • The operating program of the stochastic machine needs to be investigated at greater length • Methods of generating new abstractions from useful old ones may be adequate • Physically realizing an inductive inference machine (storing input and access speed): 3.75 MB, 25 ms
19. Conclusion • A program has been written for a computer to learn to work simple arithmetic problems after being shown a set of correctly worked examples
20. Summary • The accuracy of inference depends upon the presented data • The machine takes examples that have been useful in the past and derives new reasonable examples • The machine is to work on the problem of improving itself, given a statistical training sequence and a probability distribution • The machine will be able to prove theorems, play good chess, and answer questions in English • Example: the machine would probably be able to recognize the difference between grammatically correct and incorrect sentences, given a training sequence of grammatically correct sentences
21. This Paper's Implications • Solomonoff's theory of inductive inference: a mathematical formalization of Occam's razor and the Principle of Multiple Explanations • Assumes the world is generated by an unknown computer program • Based on algorithmic probability (Solomonoff probability) • Prediction is done using a completely Bayesian framework
22. Background • Algorithms: we are looking for an algorithm to determine truth. • Induction: by "determine truth", we mean induction. • Occam's Razor: how we judge between many inductive hypotheses. • Probability: probability is what we usually use in induction. • The Problem of Priors: probabilities change with evidence, but where do they start? The Solution • Binary Sequences: everything can be encoded as binary. • All Algorithms: hypotheses are algorithms; Turing machines describe these. • Solomonoff's Lightsaber: putting it all together. • Formalized Science: from intuition to precision. • Approximations: ongoing work towards practicality. • Unresolved Details: problems, philosophical and mathematical.
23. Solomonoff Induction Algorithm • 1. Make an observation. 2. Form a hypothesis that explains the observation. 3. Conduct an experiment that will test the hypothesis. 4. If the experimental results disconfirm the hypothesis, return to step 2 and form a hypothesis not yet used. 5. If the experimental results confirm the hypothesis, provisionally accept it. • Now we have found the truth, as best as it can be found.
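
A toy illustration of the length-weighted prior behind Solomonoff induction (slides 21–23): every hypothesis that reproduces the observations gets weight 2^-(description length), so shorter explanations dominate. The two-hypothesis class and the bit counts are invented for illustration, since the real mixture over all programs is uncomputable:

```python
# Rough illustration of Solomonoff-style weighting over a tiny, hand-picked
# hypothesis class; everything here is an illustrative assumption.

observations = [0, 1, 0, 1, 0, 1]

# Each hypothesis: (description length in bits, rule giving the n-th symbol).
hypotheses = [
    (3, lambda n: n % 2),                  # "alternate 0 and 1" - short description
    (9, lambda n: n % 2 if n < 6 else 1),  # "alternate six times, then all 1s" - longer
]

def weight(length, rule, data):
    """2^-length prior if the rule reproduces the data exactly, else 0."""
    fits = all(rule(i) == x for i, x in enumerate(data))
    return 2.0 ** -length if fits else 0.0

weights = [weight(length, rule, observations) for length, rule in hypotheses]
total = sum(weights)
n = len(observations)
p_next_is_1 = sum(w for (_, rule), w in zip(hypotheses, weights) if rule(n) == 1) / total
print(p_next_is_1)  # ~0.015: the shorter "alternating" hypothesis dominates
```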
24. More Details • Bayes' rule: Guide, https://arbital.com/p/bayes_rule/?l=1zq • An Intuitive Explanation of Solomonoff Induction, https://www.lesswrong.com/posts/Kyc5dFDzBg4WccrbK/an-intuitive-explanation-of-solomonoff-induction • How Bayes' theorem is consistent with Solomonoff induction, https://www.lesswrong.com/posts/5pgsbB5sqC2wLwr4d/how-bayes-theorem-is-consistent-with-solomonoff-induction
25. Thank you
