dc.contributor.advisor | Jenssen, Robert | |
dc.contributor.advisoremail | robert.jenssen@phys.uit.no | en |
dc.contributor.author | Kvisle Storås, Ola | |
dc.date.accessioned | 2009-02-16T12:54:46Z | |
dc.date.available | 2009-02-16T12:54:46Z | |
dc.date.issued | 2007-12-17 | |
dc.description.abstract | This thesis is a study of pattern classification based on information theoretic criteria.
Information theoretic criteria are important measures based on entropy and divergence between data distributions.
First, the basic concepts of pattern classification are discussed,
with the well-known Bayes classification rule as a starting point.
We discuss how the Parzen window estimator may be used to obtain good density estimates,
and how such estimates can in turn be used to estimate
cost functions based on information theoretic criteria.
Furthermore, we explain a model of an information theoretic learning machine.
With cost functions based on information theoretic criteria, we argue that a learning machine potentially
captures much more information about a data set than with the traditional mean squared error (MSE) cost function.
We find that there is a geometric link between information theoretic cost functions estimated using
Parzen windowing, and mean vectors in a Mercer kernel feature space.
This link is used to propose and implement different classifiers based on the integrated squared error (ISE)
divergence measure, operating implicitly in a Mercer kernel feature space. We also apply spectral methods to implement
the same ISE classifiers working in approximations of Mercer kernel feature spaces.
We investigate the performance of the classifiers when we weight each data point with
the inverse of the probability density function at that point.
We find that the ISE classifiers working implicitly in the Mercer kernel feature space perform similarly
to a Parzen window-based Bayes classifier. Using a weighted inner-product definition gives slightly better results for
some data sets, while for other data sets the classification rates are slightly worse.
When comparing the results of the implicit ISE classifier using unweighted data points with those of the Parzen window
Bayes classifier, some of the results indicate that the ISE classifier favors the classes with the highest entropy. | en |
dc.format.extent | 1092544 bytes | |
dc.format.extent | 2070 bytes | |
dc.format.mimetype | application/pdf | |
dc.format.mimetype | text/plain | |
dc.identifier.uri | https://hdl.handle.net/10037/1773 | |
dc.identifier.urn | URN:NBN:no-uit_munin_1538 | |
dc.language.iso | eng | en |
dc.publisher | Universitetet i Tromsø | en |
dc.publisher | University of Tromsø | en |
dc.rights.accessRights | openAccess | |
dc.rights.holder | Copyright 2007 The Author(s) | |
dc.subject.courseID | FYS-3921 | nor |
dc.subject | VDP::Mathematics and natural science: 400::Information and communication science: 420::Simulation, visualization, signal processing, image processing: 429 | en |
dc.subject | Information theoretic learning | en |
dc.subject | Pattern classification | en |
dc.title | Information theoretic learning for pattern classification | en |
dc.type | Master thesis | en |
dc.type | Mastergradsoppgave | en |