The 18th International Conference
on
Algorithmic Learning Theory

AND

The 10th International Conference on Discovery Science



Sendai, Japan
October 1 - 4, 2007


TUTORIALS

FOR ALT 2007 AND DS 2007


organized by ALT





Marcus Hutter, Research School of Information Sciences and Engineering, Australian National University, Canberra, Australia

On the Philosophical, Statistical, and Computational Foundations of Inductive Inference and Intelligent Agents



Motivation: The dream of creating artificial devices that reach or outperform human intelligence is an old one; however, despite considerable effort over the last 50 years, no computationally efficient theory of true intelligence has yet been found. Nowadays most research is more modest, focusing on solving narrower, specific problems associated with only some aspects of intelligence, such as playing chess or translating natural language, either as a goal in itself or as a bottom-up approach. The dual, top-down approach is to first find a formal (mathematical, not necessarily computational) solution of the general AI problem, and then to consider computationally feasible approximations. Note that the AI problem remains non-trivial even when computational aspects are ignored.

Inductive inference: A key property of intelligence is to learn from experience, build models of the environment from the acquired knowledge, and use these models for prediction. In philosophy this is called inductive inference, in statistics it is called estimation and prediction, and in computer science it is addressed by machine learning.
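A minimal, hypothetical sketch of this predictive loop (invented here for illustration, not part of the tutorial) is Bayesian prediction over a finite class of Bernoulli models: each model is a candidate explanation of the data, and prediction averages the models' forecasts weighted by their posterior plausibility.

```python
# Hypothetical sketch: inductive inference as Bayesian prediction over
# a finite class of Bernoulli models of a binary sequence.

def bayes_predict(sequence, biases, prior):
    """Posterior-weighted probability that the next symbol is 1."""
    posterior = list(prior)
    for x in sequence:
        # Multiply each model's weight by its likelihood of the symbol x.
        posterior = [w * (b if x == 1 else 1 - b)
                     for w, b in zip(posterior, biases)]
        total = sum(posterior)
        posterior = [w / total for w in posterior]  # renormalize
    # Predictive probability: posterior-weighted average of model forecasts.
    return sum(w * b for w, b in zip(posterior, biases))

# Three candidate coin biases with a uniform prior; after observing
# mostly 1s, the prediction shifts toward the high-bias model.
p_next = bayes_predict([1, 1, 1, 0, 1], [0.2, 0.5, 0.8], [1/3, 1/3, 1/3])
```

With no data the prediction is just the prior mean; as evidence accumulates, the posterior concentrates on the models that explain the sequence best.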

Intelligent agents: The second key property of intelligence is to exploit the learned predictive model for making intelligent decisions or actions. This combination of learning and acting is called reinforcement learning in computer science, adaptive control in engineering, and sequential decision theory in statistics and other fields.
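When the environment model is already known, the decision-making side can be illustrated by value iteration on a tiny toy MDP (a hypothetical example invented here; the states, actions, and rewards are not from the tutorial):

```python
# Hypothetical sketch of sequential decision theory: value iteration
# on a known two-state MDP picks actions maximizing expected
# discounted cumulative reward.

def value_iteration(states, actions, transition, reward, gamma=0.9, iters=100):
    """Return state values and a greedy policy for a known, deterministic MDP."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        # Bellman update: best one-step reward plus discounted future value.
        V = {s: max(reward(s, a) + gamma * V[transition(s, a)]
                    for a in actions)
             for s in states}
    # Greedy policy with respect to the (approximately) converged values.
    policy = {s: max(actions,
                     key=lambda a: reward(s, a) + gamma * V[transition(s, a)])
              for s in states}
    return V, policy

# Two states: only state 'B' is rewarding, and action 'go' moves there.
V, pi = value_iteration(
    states=['A', 'B'],
    actions=['stay', 'go'],
    transition=lambda s, a: 'B' if a == 'go' else s,
    reward=lambda s, a: 1.0 if s == 'B' else 0.0,
)
```

The resulting policy chooses 'go' in state 'A': the agent sacrifices nothing now to reach the rewarding state, exactly the kind of foresight sequential decision theory formalizes.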

Contents: The tutorial will introduce the philosophical, statistical, and computational perspectives on inductive inference, and Solomonoff's unifying universal solution. If time permits, a unified view of the intelligent agent framework will also be introduced. Putting everything together, we arrive at an elegant, parameter-free mathematical theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. We will argue that it represents a conceptual solution to the AI problem, thus reducing it to a purely computational problem.

Technical content: Despite the grand vision above, most of the tutorial necessarily is devoted to introducing the key ingredients of this theory, which are important subjects in their own right: Occam's razor; Turing machines; Kolmogorov complexity; probability theory; Solomonoff induction; Bayesian sequence prediction; minimum description length principle; intelligent agents; sequential decision theory; adaptive control theory; reinforcement learning; Levin search and extensions.
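As a toy illustration of how some of these ingredients fit together (Occam's razor, description length, Bayesian prediction), one can mimic a Solomonoff-style mixture over a small hand-picked hypothesis class. In this hypothetical sketch (invented here, not from the tutorial), the "programs" are periodic binary patterns, each weighted by 2^(-description length):

```python
# Toy Occam-weighted mixture: deterministic hypotheses (periodic
# binary patterns) get prior weight 2^(-length); hypotheses
# inconsistent with the observed history are eliminated, and the
# shortest surviving pattern dominates the prediction.

def mixture_predict(history, patterns):
    """Predict the next bit from an Occam-weighted mixture of patterns."""
    num = den = 0.0
    for pattern in patterns:
        # Shorter descriptions get exponentially larger prior weight.
        weight = 2.0 ** (-len(pattern))
        predicted = [pattern[i % len(pattern)] for i in range(len(history) + 1)]
        if predicted[:-1] == list(history):  # consistent with the data so far?
            den += weight
            num += weight * predicted[-1]
    return num / den

# The short pattern [0, 1] and a longer pattern both fit the history,
# but the short one carries almost all the weight, so the mixture
# predicts the next bit is very likely 0.
p = mixture_predict([0, 1, 0, 1, 0, 1],
                    [[0, 1], [0, 1, 0, 1, 0, 1, 1], [0]])
```

Solomonoff induction replaces this finite, hand-picked class with all programs for a universal Turing machine, which is exactly what makes it universal but incomputable.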

Slides




Kazuyuki Tanaka, Graduate School of Information Sciences, Tohoku University, Japan

Introduction to Probabilistic Image Processing and Bayesian Networks



Abstract: A Bayesian network is one of the standard methods for probabilistic inference in artificial intelligence, and some probabilistic models for image processing can also be regarded as Bayesian networks. Many researchers in computer science and statistics are interested in probabilistic image processing as a powerful method for treating the uncertainty of real-world image data systematically.
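For readers unfamiliar with the basic mechanics, here is a minimal hypothetical example (not from the talk) of exact inference in the smallest possible Bayesian network, two binary nodes Disease → Test; all numbers and names are invented for illustration:

```python
# Two-node Bayesian network Disease -> Test: the posterior of the
# hidden cause given an observed effect follows from Bayes' rule
# applied to the network's two conditional probability tables.

def posterior(prior, p_pos_given_d, p_pos_given_not_d):
    """P(disease | positive test) in the two-node network."""
    joint_d = prior * p_pos_given_d            # P(disease, positive)
    joint_not = (1 - prior) * p_pos_given_not_d  # P(no disease, positive)
    return joint_d / (joint_d + joint_not)

# A rare condition with a fairly accurate test: the posterior is far
# larger than the 1% prior, yet still well below certainty.
p = posterior(0.01, 0.95, 0.05)
```

Larger networks compute the same kind of quantity, but summing over all hidden configurations becomes intractable, which is where the approximate algorithms discussed below come in.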

However, many probabilistic models for image processing are massive, and it is hard to calculate statistical quantities such as averages and variances exactly; one has to resort to approximate algorithms. Belief propagation has been investigated as one such approximate algorithm for probabilistic inference in Bayesian networks, and it has recently been applied to probabilistic image processing.
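As a concrete illustration of the message-passing idea (a hypothetical toy example invented here, not taken from the talk), the sum-product form of belief propagation can be written down for a chain of binary pixels with a simple smoothing potential; on chains and trees the computed marginals are exact, and loopy graphs reuse the same updates approximately:

```python
# Sum-product belief propagation on a chain of binary variables,
# the simplest tree-structured graphical model.

def chain_marginals(n, pair):
    """Marginals of p(x) proportional to prod_i pair(x_i, x_{i+1})."""
    # Forward messages: sum out all variables to the left of node i.
    fwd = [[1.0, 1.0] for _ in range(n)]
    for i in range(1, n):
        fwd[i] = [sum(fwd[i - 1][u] * pair(u, x) for u in (0, 1))
                  for x in (0, 1)]
    # Backward messages: sum out all variables to the right of node i.
    bwd = [[1.0, 1.0] for _ in range(n)]
    for i in range(n - 2, -1, -1):
        bwd[i] = [sum(bwd[i + 1][v] * pair(x, v) for v in (0, 1))
                  for x in (0, 1)]
    # Belief at node i: normalized product of the incoming messages.
    out = []
    for i in range(n):
        b = [fwd[i][x] * bwd[i][x] for x in (0, 1)]
        z = sum(b)
        out.append([v / z for v in b])
    return out

# Smoothing potential: neighbouring pixels prefer to take equal values.
smooth = lambda a, b: 2.0 if a == b else 1.0
marg = chain_marginals(3, smooth)
```

Each message summarizes an entire half of the chain in two numbers, which is why the cost grows linearly in the number of pixels rather than exponentially.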

Moreover, these models are similar to some physical models proposed for investigating the properties of materials, and some physicists are therefore interested in the mathematical structure of probabilistic image processing. Belief propagation has a mathematical structure similar to that of mean-field methods, particularly the Bethe approximation, and some physicists have proposed new algorithms based on belief propagation from a physics point of view. Probabilistic image processing can thus be regarded as an interesting topic in the interdisciplinary area between computer science and physics.

In this talk, the statistical aspects and practical schemes of applying Bayesian networks to probabilistic image processing are reviewed. The first part introduces probabilistic models for image processing based on the basic framework of Bayesian networks. The second part is a brief review of belief propagation. The third part surveys fundamental belief-propagation algorithms for probabilistic image processing. In the concluding remarks, some recent developments of Bayesian networks and belief propagation in computer science are presented.

Slides (Power Point)

Slides (PDF)

Slides (handout-style)

References

  1. K. Tanaka, Statistical-Mechanical Approach to Image Processing (Topical Review),
    Journal of Physics A: Mathematical and General, vol. 35, no. 37, pp. R81-R150, 2002.
  2. K. Tanaka and J. Inoue, Maximum Likelihood Hyperparameter Estimation for Solvable Markov Random Field Model in Image Restoration,
    IEICE Transactions on Information and Systems, vol. E85-D, no. 3, pp. 546-557, 2002.
  3. K. Tanaka, J. Inoue and D. M. Titterington, Probabilistic Image Processing by Means of Bethe Approximation for Q-Ising Model,
    Journal of Physics A: Mathematical and General, vol. 36, no. 43, pp. 11023-11036, 2003.
  4. K. Tanaka, H. Shouno, M. Okada and D. M. Titterington, Accuracy of the Bethe Approximation for Hyperparameter Estimation in Probabilistic Image Processing,
    Journal of Physics A: Mathematical and General, vol. 37, no. 36, pp. 8675-8696, 2004.
  5. K. Tanaka and D. M. Titterington, Statistical Trajectory of Approximate EM Algorithm for Probabilistic Image Processing,
    Journal of Physics A: Mathematical and Theoretical, vol. 40, no. 37, pp. 11285-11300, 2007.
  6. K. Tanaka, Introduction of Probabilistic Model and Image Processing,
    Morikita Publishing Co., Ltd., 2006 (in Japanese).
  7. K. Tanaka (ed.), Probabilistic Information Processing and Statistical Mechanics,
    Saiensu Publishing Co., 2006 (in Japanese).

