Invited Speakers
The invited speakers are Avrim Blum,
Kristian Kersting,
John Shawe-Taylor,
Gábor Lugosi and
Gianluca Bontempi.
(1) ALT 2016 Keynote Speaker
Avrim Blum:
Learning about Agents and Mechanisms from Opaque Transactions.
In this talk I will discuss some learning problems coming from the
area of algorithmic economics. I will focus in particular on
settings known as combinatorial auctions in which agents have
preferences over items or sets of items, and interact with an
auction or allocation mechanism that determines what items are given
to which agents. We consider the perspective of an outside observer
who each day can only see which agents show up and what they get, or
perhaps just which agents' needs are satisfied and which are not.
Our goal will be to learn, from observing a series of such interactions,
the agents' preferences and perhaps also the rules of the
allocation mechanism.
As an example, consider observing web pages where the agents are
advertisers and the winners are those whose ads show up on the given
page. Or consider observing the input-output behavior of a cloud
computing service, where the input consists of a set of agents
requesting service, and the output is a partition of them into some
whose requests are actually fulfilled and the rest that are
not, due to overlap of their resource needs with higher-priority
requests. From such input-output behavior, we would like to learn
the underlying structure. We also consider a classic Myerson
single-item auction setting, where from observing who wins and also
being able to participate ourselves we would like to learn the
agents' valuation distributions.
In examining these problems we will see connections to decision-list
learning and to Kaplan-Meier estimators from medical statistics.
This talk is based on joint work with Yishay Mansour and Jamie Morgenstern.
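As a reminder of one of the tools mentioned above, the following is a minimal,
self-contained sketch of the Kaplan-Meier product-limit estimator; it
illustrates the standard survival-analysis technique rather than the specific
construction used in the talk, and all names and data are illustrative.

    # Minimal sketch of the Kaplan-Meier (product-limit) estimator from
    # survival analysis, referenced above; this shows the standard technique,
    # not the specific construction used in the talk.  Names and data are
    # illustrative.
    from collections import Counter

    def kaplan_meier(times, events):
        """times: observation times; events: 1 = event observed, 0 = censored.
        Returns a list of (time, estimated survival probability) pairs."""
        event_counts = Counter(t for t, e in zip(times, events) if e == 1)
        censor_counts = Counter(t for t, e in zip(times, events) if e == 0)
        at_risk = len(times)
        survival = 1.0
        curve = []
        for t in sorted(set(times)):
            d = event_counts.get(t, 0)              # events observed at time t
            if d > 0 and at_risk > 0:
                survival *= 1.0 - d / at_risk       # product-limit update
                curve.append((t, survival))
            at_risk -= d + censor_counts.get(t, 0)  # drop events and censored
        return curve

    # Example with right-censored observations (event flag 0 = censored).
    print(kaplan_meier([2, 3, 3, 5, 7, 8], [1, 1, 0, 1, 0, 1]))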
Avrim Blum is Professor of Computer Science at Carnegie Mellon
University. His main research interests are in Foundations of Machine
Learning and Data Mining, Algorithmic Game Theory (including auctions,
pricing, dynamics, and connections to machine learning), the analysis
of heuristics for computationally hard problems, and Database Privacy.
He has served as Program Chair for the IEEE Symposium on Foundations
of Computer Science (FOCS) and the Conference on Learning Theory
(COLT). He is a recipient of the Sloan Fellowship, the NSF National
Young Investigator Award, the ICML/COLT 10-year best paper award, and
the Herbert Simon Teaching Award, and is a Fellow of the ACM.
(2) DS 2016 Keynote Speaker
Kristian
Kersting: Collective attention on the web.
It's one of the most popular YouTube videos ever produced,
having been viewed more than
840 million times. It's hard to understand why this clip is
so famous and actually went viral, since nothing much happens. Two
little boys, Charlie and Harry, are sitting in a chair when
Charlie, the younger brother, mischievously bites Harry's
finger. There's a shriek and
then a laugh. The clip is called "Charlie Bit My Finger--Again!"
Generally, understanding the dynamics of collective attention is
central to an information age where millions of people leave digital
footprints every day.
So, can we capture the dynamics of collective attention
mathematically? Can we even gain insights into the underlying physical
and social processes? Is it, for instance, fair to call the video
"viral" in an epidemiological sense?
In this talk I shall argue that developing computational methods for
collective attention is not an insurmountable task. I shall review the methods we have
developed to characterize,
analyze, and even predict the dynamics of collective attention among
millions of users to and within social media services.
For instance, we found that collective attention to memes and social
media grows and subsides in a highly regular manner, well
explained by economic diffusion models. Using mathematical
epidemiology, we find that so-called "viral" videos show very high
infection rates and, hence,
should indeed be called viral. Moreover, the
spreading processes may also be related to the underlying network structures,
suggesting for instance a physically plausible model of the distance
distributions of undirected networks. All this
favors machine learning
and discovery science approaches that produce physically plausible models.
This work was partly supported by the Fraunhofer ICON project SoFWIReD
and by the DFG Collaborative Research Center SFB 876 project A6.
It is based on joint work with Christian Bauckhage, Fabian Hadiji,
and Rastegarpanah.
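To make the epidemiological analogy concrete, the following is a minimal
sketch of a discrete-time SIR (susceptible-infected-recovered) model; it is a
generic textbook model, not necessarily the exact one fitted in the cited
work, and all parameter values are illustrative.

    # Minimal sketch of a discrete-time SIR (susceptible-infected-recovered)
    # model.  This is a generic textbook model, not necessarily the exact
    # model fitted in the cited work; beta (infection rate), gamma (recovery
    # rate), and all other values are illustrative.
    def sir_curve(population, beta, gamma, initially_infected=1, steps=100):
        s, i, r = population - initially_infected, float(initially_infected), 0.0
        infected_over_time = []
        for _ in range(steps):
            new_infections = beta * s * i / population
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            infected_over_time.append(i)
        return infected_over_time

    # A high infection rate relative to the recovery rate (beta >> gamma)
    # reproduces the sharp rise-and-decay profile of "viral" attention curves.
    curve = sir_curve(population=1_000_000, beta=0.9, gamma=0.2)
    print(curve.index(max(curve)))  # time step at which attention peaks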
Kristian Kersting is an Associate Professor in the Computer Science
Department at the Technical University of Dortmund. His main research
interests are data mining, machine learning, and statistical
relational artificial intelligence, with applications to medicine,
plant phenotyping, traffic, and collective attention. He gave several
tutorials at top conferences and co-chaired BUDA, CMPL, CoLISD, MLG,
and SRL as well as the AAAI Student Abstract track and the Starting AI
Research Symposium (STAIRS). Together with Stuart Russell (Berkeley),
Leslie Kaelbling (MIT), Alon Halevy (Google), Sriraam Natarajan
(Indiana) and Lilyana Mihalkova (Google) he cofounded the
international workshop series on Statistical Relational AI. He served
as area chair/senior PC for several top conferences and co-chaired ECML
PKDD 2013, the premier European venue for Machine Learning and Data
Mining. Currently, he is an action editor of JAIR, AIJ, DAMI, and MLJ
as well as on the editorial board of NGC.
(3)
Joint ALT 2016 and DS 2016 Keynote Speaker
John Shawe-Taylor:
Margin-based structured output learning.
Structured output learning has been developed to borrow strength
across multidimensional classifications. There have been approaches
to bounding the performance of these classifiers based on different
measures such as microlabel errors with a fixed simple output structure.
We present a different approach and analysis starting from the assumption
that there is a margin attainable in some unknown or fully connected
output structure. The analysis and algorithms flow from this assumption,
but in such a way that the associated inference becomes tractable while the
bounds match those that would be attained were we to use the full structure. There
are two variants depending on how the margin is estimated. Experimental
results show the relative strengths of these variants, both
algorithmically and statistically.
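For readers less familiar with margin notions in structured prediction, the
following generic separation condition (with joint feature map \phi and weight
vector w) is the standard textbook formulation; the talk's exact formulation
over unknown or fully connected output structures may differ:

    \[
      \forall i,\ \forall y \neq y_i:\quad
      \langle w, \phi(x_i, y_i)\rangle \;-\; \langle w, \phi(x_i, y)\rangle \;\ge\; \gamma ,
    \]

that is, the joint score of the correct multidimensional labelling y_i exceeds
the score of every competing labelling y by at least the margin gamma.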
John Shawe-Taylor is a professor at University
College London, where he directs the Centre for
Computational Statistics and Machine Learning
and heads the Department of Computer Science.
His research has contributed to a number of fields ranging
from graph theory through cryptography to statistical
learning theory and its applications. However, his main
contributions have been in the development of
the analysis and subsequent algorithmic definition
of principled machine learning algorithms founded in
statistical learning theory. He has co-authored
two influential text books on kernel methods and
support vector machines. He has also been instrumental
in coordinating a series of influential European
Networks of Excellence culminating in the PASCAL networks.
(4) ALT 2016 Tutorial Speaker
Gábor Lugosi:
How to Estimate the Mean of a Random Variable?
Given n independent, identically distributed copies of a
random variable, one is interested in estimating the expected value.
Perhaps surprisingly, there are still open questions concerning this
very basic problem in statistics. In this talk we are primarily
interested in non-asymptotic sub-Gaussian estimates for potentially
heavy-tailed random variables. We discuss various estimates and
extensions to high dimensions. We apply the estimates to
statistical learning and regression function estimation problems.
The methods improve on classical empirical minimization techniques.
This talk is based on joint work with Emilien Joly, Luc Devroye,
Matthieu Lerasle, Roberto Imbuzeiro Oliveira, and Shahar Mendelson.
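For concreteness, the following is a minimal sketch of the median-of-means
estimator, one well-known route to non-asymptotic sub-Gaussian mean estimation
for heavy-tailed data (the talk covers this and more refined estimators);
function names and the toy data are illustrative.

    # Minimal sketch of the median-of-means estimator, one well-known route
    # to non-asymptotic sub-Gaussian mean estimation for heavy-tailed data;
    # the talk covers this and more refined estimators.  Names and the toy
    # data below are illustrative.
    import random
    import statistics

    def median_of_means(samples, num_blocks):
        """Split the sample into num_blocks groups, average each group,
        and return the median of the group means."""
        shuffled = list(samples)
        random.shuffle(shuffled)  # blocks should not depend on the input order
        blocks = [shuffled[i::num_blocks] for i in range(num_blocks)]
        block_means = [sum(b) / len(b) for b in blocks if b]
        return statistics.median(block_means)

    # Heavy-tailed toy data for which the plain empirical mean is unstable.
    data = [random.paretovariate(2.5) for _ in range(10_000)]
    print(median_of_means(data, num_blocks=25))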
Gábor Lugosi is an ICREA research professor at
the Department of Economics, Pompeu Fabra University,
Barcelona. His main research interests include the
theory of machine learning, combinatorial statistics,
inequalities in probability, random graphs and random
structures, and information theory. He has co-authored
monographs on pattern classification, on online learning,
and on concentration inequalities.
(5) DS 2016 Tutorial Speaker
Gianluca Bontempi:
Perspectives of feature selection in bioinformatics: from relevance
to causal inference.
A major goal of scientific activity is to model real phenomena by
studying the dependencies between entities, objects, or, more generally,
variables. Sometimes the goal of the modeling activity is simply
predicting future behaviors. Sometimes the goal is to understand the
causes of a phenomenon (e.g. a disease). Finding causes from data is
particularly challenging in
data mining and bioinformatics where often the number of features
(e.g. number of SNPs or microarray probes) is huge with respect to the
number of samples. In this context, even when experimental
interventions are possible, performing thousands of experiments to
discover causal relationships between thousands of variables is not
practical. Dimensionality reduction techniques have been largely
discussed and used in bioinformatics to deal with the curse of
dimensionality. However, most of the time these techniques focus on
improving prediction accuracy, neglecting causal aspects. This talk
will discuss a major open issue: can feature selection techniques also be
useful for causal feature selection? Is prediction accuracy
compatible with causal discovery? Some results based on an information
theory approach will be used to illustrate the issue.
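To illustrate the kind of information-theoretic relevance measure the talk
builds on, the following is a minimal sketch of a mutual-information filter
ranking for discrete features; a high score indicates predictive relevance
but, as the talk stresses, not necessarily a causal relationship. The data and
names below are made up for illustration.

    # Minimal sketch of a mutual-information relevance filter for discrete
    # features, the kind of information-theoretic criterion alluded to above.
    # A high score signals predictive relevance, not causation; data and
    # names are made up for illustration.
    from collections import Counter
    from math import log2

    def mutual_information(xs, ys):
        n = len(xs)
        px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
        return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    def rank_features(feature_columns, target):
        """Rank features by mutual information with the target (a filter method)."""
        scores = {name: mutual_information(col, target)
                  for name, col in feature_columns.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Toy binary example: "snp1" is perfectly predictive of the outcome.
    features = {"snp1": [0, 0, 1, 1, 0, 1], "snp2": [1, 0, 1, 0, 1, 0]}
    disease = [0, 0, 1, 1, 0, 1]
    print(rank_features(features, disease))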
Gianluca Bontempi is Full Professor in the Computer Science Department
at the Université Libre de Bruxelles (ULB), Brussels, Belgium,
co-head of the ULB Machine Learning Group and Director of (IB)², the
ULB/VUB Interuniversity Institute of Bioinformatics in Brussels. His
main research interests are big data mining, machine learning,
bioinformatics, causal inference, predictive modelling and their
application to complex tasks in engineering (forecasting, fraud
detection) and life science. He was a Marie Curie fellow, received awards
in two international data analysis competitions, and took part in many
research projects in collaboration with universities and private companies
all over Europe. He is the author of more than 200 scientific publications,
an associate editor of PLOS ONE, a member of the scientific advisory board
of CHIST-ERA, and an IEEE Senior Member. He is
also co-author of several open-source software packages for
bioinformatics, data mining and prediction.