Hokkaido University
Graduate School of Information Science and Technology
Division of Computer Science
Knowledge Software Science Research Group
Laboratory for Algorithmics
Thomas Zeugmann

Research

Generally speaking, we are interested in everything around algorithms and the theoretical foundations of computer science. More specifically, we are interested in learning algorithms, inductive inference, recursion theory, theory of computation, property testing, complexity theory, the average-case analysis of algorithms, probabilistic algorithms, parallel algorithms, logic and logic design techniques including their implementation, Kolmogorov complexity, data mining, information security and privacy-preserving computations, algorithm design techniques, the design and analysis of new data structures, and the foundations of all these research areas.

The history of algorithms goes back roughly to the very origins of mathematics. For thousands of years, solving a mathematical problem was, in most cases, equivalent to constructing an algorithm that solved it. This ancient development of algorithms culminated in Euclid's famous “Elements.” Unfortunately, after Euclid the design of algorithms went into a decline that lasted roughly 2000 years, as it became increasingly popular to prove the existence of mathematical objects by reductio ad absurdum instead.
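A concrete example of such an ancient algorithm is Euclid's procedure for computing the greatest common divisor, which appears in Book VII of the “Elements.” The following is only an illustrative sketch in Python; the function name is of course ours, not Euclid's.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: repeatedly replace the pair (a, b)
        by (b, a mod b) until the remainder vanishes."""
        while b != 0:
            a, b = b, a % b
        return a

    # Example: the greatest common divisor of 252 and 198 is 18.
    print(gcd(252, 198))  # -> 18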

Modern computation theory starts with the question:

Which problems can be solved algorithmically?

It was posed because, despite the enormous efforts of numerous mathematicians, several problems had remained unsolved, e.g., the construction of an algorithm deciding whether a given Diophantine equation has an integral solution (Hilbert's 10th problem). To answer it, the intuitive notion of an algorithm first had to be formalized mathematically. Starting from different points of view, Turing (1936/37), Church (1936), and Gödel (1931) gave their formalizations (i.e., the Turing machine, the λ-calculus, and the recursive functions, respectively). All these notions turned out to be equivalent, i.e., each computation in one formalism can be simulated in any of the others and vice versa. Since then, many other proposals have been made to fix the notion of an algorithm (e.g., by Post and by Markov). This led to the well-known “Church's Thesis,” i.e., the intuitively computable functions are exactly the Turing machine computable ones.
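To make the notion of a Turing machine a little more tangible, here is a minimal sketch of a one-tape Turing machine simulator in Python. The particular machine encoded in the transition table (it appends a 1 to a block of 1s, i.e., computes the successor function in unary) is chosen purely for illustration and is not taken from any of the papers mentioned above.

    # A minimal one-tape Turing machine simulator (illustrative sketch only).
    # A machine is given by a transition table mapping
    #   (state, read symbol) -> (new state, written symbol, head move).

    def run_tm(transitions, tape, state="q0", blank="_", max_steps=10_000):
        tape = dict(enumerate(tape))   # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            symbol = tape.get(head, blank)
            if (state, symbol) not in transitions:   # no rule: the machine halts
                break
            state, written, move = transitions[(state, symbol)]
            tape[head] = written
            head += 1 if move == "R" else -1
        # return the tape contents from the leftmost to the rightmost used cell
        lo, hi = min(tape), max(tape)
        return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

    # Example machine: move right across a block of 1s and append another 1
    # (the successor function in unary notation).
    succ = {
        ("q0", "1"): ("q0", "1", "R"),   # skip over the input block
        ("q0", "_"): ("q1", "1", "R"),   # write one more 1, then halt in q1
    }

    print(run_tm(succ, "111"))  # -> "1111"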

After the theory explaining which problems can and cannot be solved algorithmically had been well developed, attention turned to the quantitative side, i.e., how efficiently can a given problem be solved? The first influential papers in this direction were written by Rabin (1959, 1960) as well as by Hartmanis and Stearns (1965). The central topic studied here was to clarify what it means to say that a function f is more difficult to compute than a function g. The distinction between problems solvable within time bounded by a polynomial in the length of the input (that is, problems having a feasible solution) and those not having this property was first made by J. von Neumann (1953). Since then, the class of polynomial time-bounded algorithms has been an object of intensive and continuing research.
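The gap between polynomially bounded and, say, exponentially bounded running times can be illustrated numerically. The small comparison below (plain Python, purely illustrative) prints how many steps an n³-time and a 2ⁿ-time algorithm would take on inputs of moderate length; the rapid explosion of the second column is the intuition behind equating “feasible” with “polynomial time.”

    # Comparing a polynomial step count (n**3) with an exponential one (2**n).
    for n in (10, 20, 40, 60):
        print(f"n = {n:3d}   n^3 = {n**3:12,d}   2^n = {2**n:25,d}")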

On the other hand, for many problems that are very important in numerous applications it is not known whether they can be solved in polynomial time. To put it mildly, progress needs its time. A good example in this regard is primality testing. Though this problem has been studied since ancient times, it took until August 6, 2002 to discover a deterministic polynomial-time algorithm that determines whether or not an input number n is prime (cf. M. Agrawal, N. Kayal and N. Saxena).
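For comparison, the straightforward way to test primality is trial division, sketched below in Python. Its running time grows roughly like √n, which is exponential in the number of digits of n; the point of the Agrawal–Kayal–Saxena result is precisely that primality can be decided in time polynomial in the number of digits. The sketch is the naive method, not the AKS algorithm.

    from math import isqrt

    def is_prime_trial_division(n: int) -> bool:
        """Naive primality test by trial division.
        Takes roughly sqrt(n) steps, i.e., exponentially many in the bit
        length of n, in contrast to the polynomial-time AKS algorithm."""
        if n < 2:
            return False
        for d in range(2, isqrt(n) + 1):
            if n % d == 0:
                return False
        return True

    print(is_prime_trial_division(104729))  # the 10000th prime -> True
    print(is_prime_trial_division(104730))  # -> False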


