dc.contributor.author: Dasgupta, Sanjoy
dc.contributor.author: Kalai, Adam Tauman
dc.contributor.author: Monteleoni, Claire
dc.date.accessioned: 2005-12-22T02:40:49Z
dc.date.available: 2005-12-22T02:40:49Z
dc.date.issued: 2005-11-17
dc.identifier.other: MIT-CSAIL-TR-2005-075
dc.identifier.other: AIM-2005-033
dc.identifier.uri: http://hdl.handle.net/1721.1/30585
dc.description.abstract: We start by showing that in an active learning setting, the Perceptron algorithm needs $\Omega(\frac{1}{\epsilon^2})$ labels to learn linear separators within generalization error $\epsilon$. We then present a simple selective sampling algorithm for this problem, which combines a modification of the perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the unit sphere, we show that our algorithm reaches generalization error $\epsilon$ after asking for just $\tilde{O}(d \log \frac{1}{\epsilon})$ labels. This exponential improvement over the usual sample complexity of supervised learning has previously been demonstrated only for the computationally more complex query-by-committee algorithm.
dc.format.extent: 15 p.
dc.format.extent: 11491832 bytes
dc.format.extent: 599624 bytes
dc.format.mimetype: application/postscript
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.relation.ispartofseries: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
dc.subject: AI
dc.subject: active learning
dc.subject: perceptron
dc.subject: label-complexity
dc.subject: selective sampling
dc.subject: mistake bound
dc.title: Analysis of Perceptron-Based Active Learning
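The abstract describes a selective-sampling perceptron: a modified perceptron update combined with an adaptive filtering rule that queries labels only for points near the current decision boundary. The abstract does not give the exact update or threshold schedule, so the sketch below is a hypothetical illustration of that general scheme: the reflection-style update, the query threshold `s0`, and the halving schedule are all assumptions, not the authors' algorithm.

```python
import numpy as np

def active_perceptron(stream, d, budget, s0=1.0):
    """Selective-sampling perceptron sketch (illustrative assumptions only).

    stream : iterable of (x, oracle) pairs, x a unit vector in R^d and
             oracle() returning the true label in {-1, +1} when called.
    budget : maximum number of label queries.
    s0     : initial query-margin threshold (assumed schedule below).
    """
    w = np.zeros(d)
    s = s0
    queries = 0
    quiet = 0  # labels obtained since the last update
    for x, oracle in stream:
        if queries >= budget:
            break
        margin = float(w @ x)
        # Adaptive filtering rule: skip points far from the current
        # hyperplane (query everything while w is still zero).
        if np.linalg.norm(w) > 0 and abs(margin) > s:
            continue
        y = oracle()          # request the label
        queries += 1
        if y * margin <= 0:   # mistake: update the hypothesis
            if np.linalg.norm(w) > 0:
                # Reflection-style update, which preserves ||w||
                # (an assumption modeled on margin-based variants).
                w = w - 2.0 * margin * x
            else:
                w = y * x
            quiet = 0
        else:
            quiet += 1
            if quiet >= d:    # assumed schedule: shrink the query window
                s /= 2.0
                quiet = 0
    return w, queries
```

Under the abstract's setting (points drawn uniformly from the unit sphere, a linear separator through the origin), a scheme of this shape queries ever-narrower bands around the boundary, which is what allows the label count to grow only logarithmically in $1/\epsilon$ rather than as $1/\epsilon^2$.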

