Extracting information from the Web for Concept Learning and Collaborative Filtering

Author: William W. Cohen.

Source: Lecture Notes in Artificial Intelligence, Vol. 1968, 2000, pp. 1-12.

Abstract. Previous work on extracting information from the Web generally makes few assumptions about how the extracted information will be used. As a consequence, the goal of Web-based extraction systems is usually taken to be the creation of high-quality, noise-free data with clear semantics. This is a difficult problem that cannot be completely automated. Here we consider instead the problem of extracting Web data for certain machine learning systems: specifically, collaborative filtering (CF) and concept learning (CL) systems. CF and CL systems are highly tolerant of noisy input, so much simpler extraction systems can be used in this context. For CL, we describe a simple method that uses a given set of Web pages to construct new features, which reduce the error rate of learned classifiers in a wide variety of situations. For CF, we describe a simple method that automatically collects useful information from the Web without any human intervention. The collected information, represented as "pseudo-users", can be used to "jumpstart" a CF system when the user base is small (or even absent).
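
To make the "pseudo-user" idea concrete, here is a minimal, hypothetical sketch (not the paper's actual method): Web-derived co-occurrence data is recast as sparse rating vectors, and a basic memory-based CF step then recommends items to a real user by similarity-weighted votes from those pseudo-users. All item names and rating values below are illustrative assumptions.

```python
from math import sqrt

# Hypothetical "pseudo-users" harvested from the Web: each maps items
# (e.g. artists mentioned together on one page) to an implicit rating of 1.
# These entries are illustrative, not data from the paper.
pseudo_users = [
    {"beatles": 1, "stones": 1, "kinks": 1},
    {"beatles": 1, "stones": 1},
    {"miles": 1, "coltrane": 1},
]

def cosine(a, b):
    """Cosine similarity between two sparse rating dicts."""
    shared = set(a) & set(b)
    num = sum(a[i] * b[i] for i in shared)
    den = (sqrt(sum(v * v for v in a.values()))
           * sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def recommend(user, population, k=2):
    """Score items the user has not seen by summing similarity-weighted
    votes from the pseudo-user population (basic memory-based CF)."""
    scores = {}
    for other in population:
        w = cosine(user, other)
        for item, r in other.items():
            if item not in user:
                scores[item] = scores.get(item, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A real user who likes only "beatles" still gets ranked suggestions,
# even though no real users exist yet -- the pseudo-users supply the signal.
print(recommend({"beatles": 1}, pseudo_users))
```

The point of the sketch is that the CF machinery is unchanged: pseudo-users simply populate the rating matrix before any real users arrive, which is why noisy extraction is tolerable here.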

©Copyright 2000 Springer