source: distools/disex_protselfd.m @ 79

Last change on this file since 79 was 10, checked in by bduin, 14 years ago
%DISEX_PROTSELFD Example of forward prototype selection
%
% This example shows the use of PROTSELFD for a greedy forward
% selection of prototypes from a square dissimilarity matrix in
% order to optimize the representation set.
%
% The final plot shown is a 'feature curve'. This is the error
% as a function of the number of features used (here the number
% of prototypes). The error measure is the mean classification
% error of the 1-NN rule using the given dissimilarities as
% distances (KNNDC([],1)).

d = readchicken(20,45);  % read dissimilarity dataset
w = protselfd(d);        % forward prototype selection
n = size(w,2);           % max number of selected prototypes
                         % random prototype (feature) ranking
v = featsel(size(d,2),randperm(n));

K = [1 2 3 5 7 10 15 20 30 50 70 100 150 200 300 500 700 1000];
K = [K(K<n) n];          % dimensionalities to be checked
21
% In the next step the feature curve is built. Note that features
% here are prototypes (representation objects).
% knndc([],1) is used for classification, i.e. the values in d
% (or d*w) are used as distances in the 1-NN rule.
% testd is used for evaluation as it is robust against missing
% classes (classes not available in the representation set).
% In 10 repetitions 50% of the data is used for training and 50%
% for testing. Note that the final performances are biased, as all
% data is used in this example for prototype selection.

ew = clevalf(d*w,knndc([],1),K,0.5,10,[],testd);
ew.names = 'Forward selection';
ev = clevalf(d*v,knndc([],1),K,0.5,10,[],testd);
ev.names = 'Random selection';
ev.title = getname(d);
ev.xlabel = 'Size of Representation Set';
plote({ew,ev})
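
% For reference, the greedy idea behind PROTSELFD can be sketched in
% plain MATLAB. This is an illustration only, not the PROTSELFD
% implementation (which works on PRTools datasets and returns a
% mapping). It assumes D is a square dissimilarity matrix, lab a
% column vector of class labels, and nprot the number of prototypes
% wanted; each step adds the prototype whose inclusion minimizes the
% 1-NN error over the selected columns.
%
%   sel  = [];                           % selected prototype indices
%   pool = 1:size(D,2);                  % remaining candidates
%   for k = 1:nprot
%     e = zeros(1,numel(pool));
%     for j = 1:numel(pool)
%       cand = [sel pool(j)];            % candidate representation set
%       [~,nn] = min(D(:,cand),[],2);    % nearest prototype per object
%       pl = lab(cand(nn));              % labels of nearest prototypes
%       e(j) = mean(pl(:) ~= lab(:));    % resulting 1-NN error
%     end
%     [~,best] = min(e);
%     sel = [sel pool(best)];            % greedily add the best prototype
%     pool(best) = [];
%   end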