Changeset 20
- Timestamp: 07/16/11 14:48:08
- Location: distools
- Files: 5 edited
Legend:
- Unmodified lines carry no prefix
- Added lines are prefixed with "+"
- Removed lines are prefixed with "-"
distools/clevald.m
(r18 -> r20)

 %CLEVALD Classifier evaluation (learning curve) for dissimilarity data
 %
-% E = CLEVALD(D,CLASSF,TRAINSIZES,REPSIZE,NREPS,T)
+% E = CLEVALD(D,CLASSF,TRAINSIZES,REPSIZE,NREPS,T,TESTFUN)
 %
 % INPUT
 %   D           Square dissimilarity dataset
 %   CLASSF      Classifiers to be evaluated (cell array)
-%   TRAINSIZE   Vector of class sizes, used to generate subsets of D
-%               (default [2,3,5,7,10,15,20,30,50,70,100])
+%   TRAINSIZE   Vector of training set sizes, used to generate subsets of D
+%               (default [2,3,5,7,10,15,20,30,50,70,100]). TRAINSIZE is per
+%               class unless D has no priors set or has soft labels.
 %   REPSIZE     Representation set size per class (>=1), or fraction (<1)
 %               (default total, training set)
 %   NREPS       Number of repetitions (default 1)
 %   T           Test dataset (default [], use remaining samples in A)
+%   TESTFUN     Mapping,evaluation function (default classification error)
 %
 % OUTPUT
…
 % Generates at random, for all class sizes defined in TRAINSIZES, training
 % sets out of the dissimilarity dataset D. The representation set is either
-% equal to the training set (REPSIZE = []), or a fraction of it (REPSIZE
+% equal to the training set (REPSIZE = []), or a fraction of it (REPSIZE <1)
 % or a random subset of it of a given size (REPSIZE>1). This set is used
 % for training the untrained classifiers CLASSF. The resulting trained
 % classifiers are tested on the training objects and on the left-over test
 % objects, or, if supplied, the testset T. This procedure is then repeated
-% NREPS times.
+% NREPS times. The default test routine is classification error estimation
+% by TESTC([],'crisp').
 %
 % The returned structure E contains several fields for annotating the plot
…
 % P.O. Box 5031, 2600 GA Delft, The Netherlands

-function e = cleval(a,classf,learnsizes,repsize,nreps,t)
+function e = clevald(a,classf,learnsizes,repsize,nreps,t,testfun)

 prtrace(mfilename);

+if (nargin < 7) | isempty(testfun)
+  testfun = testc([],'crisp');
+end;
 if (nargin < 6)
   t = [];
…
 % smallest class.

-mc = classsizes(a); [m,k,c] = getsize(a);
-toolarge = find(learnsizes >= min(mc));
-if (~isempty(toolarge))
-  prwarning(2,['training set class sizes ' num2str(learnsizes(toolarge)) ...
-    ' larger than the minimal class size in A; remove them']);
-  learnsizes(toolarge) = [];
-end
+[m,k,c] = getsize(a);
+if ~isempty(a,'prior') & islabtype(a,'crisp')
+  classs = true;
+  mc = classsizes(a);
+  toolarge = find(learnsizes >= min(mc));
+  if (~isempty(toolarge))
+    prwarning(2,['training set class sizes ' num2str(learnsizes(toolarge)) ...
+      ' larger than the minimal class size; removed them']);
+    learnsizes(toolarge) = [];
+  end
+else
+  if islabtype(a,'crisp') & isempty(a,'prior')
+    prwarning(1,['No priors found in dataset, class frequencies are used.' ...
+      newline ' Training set sizes hold for entire dataset']);
+  end
+  classs = false;
+  toolarge = find(learnsizes >= m);
+  if (~isempty(toolarge))
+    prwarning(2,['training set sizes ' num2str(learnsizes(toolarge)) ...
+      ' larger than number of objects; removed them']);
+    learnsizes(toolarge) = [];
+  end
+end
 learnsizes = learnsizes(:)';

…
 e.appstd  = zeros(nw,length(learnsizes));
 e.xvalues = learnsizes(:)';
-e.xlabel  = 'Training set size per class';
+if classs
+  e.xlabel = 'Training set size per class';
+else
+  e.xlabel = 'Training set size';
+end
 e.names   = [];
 if (nreps > 1)
…
 % this training set in JR(CI,:).

-JR = zeros(c,max(learnsizes));
+if classs
+
+  JR = zeros(c,max(learnsizes));

-for ci = 1:c
-
-  JC = findnlab(a,ci);
-
-  % Necessary for reproducable training sets: set the seed and store
-  % it after generation, so that next time we will use the previous one.
-  rand('state',seed2);
-
-  JD = JC(randperm(mc(ci)));
-  JR(ci,:) = JD(1:max(learnsizes))';
-  seed2 = rand('state');
+  for ci = 1:c
+
+    JC = findnlab(a,ci);
+
+    % Necessary for reproducable training sets: set the seed and store
+    % it after generation, so that next time we will use the previous one.
+    rand('state',seed2);
+
+    JD = JC(randperm(mc(ci)));
+    JR(ci,:) = JD(1:max(learnsizes))';
+    seed2 = rand('state');
+  end
+
+elseif islabtype(a,'crisp')
+
+  rand('state',seed2);    % get seed for reproducable training sets
+  % generate indices for the entire dataset taking care that in
+  % the first 2c objects we have 2 objects for every class
+  [a1,a2,I1,I2] = gendat(a,2*ones(1,c));
+  JD = randperm(m-2*c);
+  JR = [I1;I2(JD)];
+  seed2 = rand('state');  % save seed for reproducable training sets
+
+else % soft labels
+
+  rand('state',seed2);    % get seed for reproducable training sets
+  JR = randperm(m);
+  seed2 = rand('state');  % save seed for reproducable training sets
+
 end

…
     J = [];
     R = [];
-    for ci = 1:c
-      J = [J;JR(ci,1:nj)'];
-      if isempty(repsize)
-        R = [R JR(ci,1:nj)];
-      elseif repsize < 1
-        R = [R JR(ci,1:ceil(repsize*nj))];
-      else
-        R = [R JR(ci,1:min(nj,repsize))];
-      end
-
-    end;
-
-    w = a(J,R)*classf{wi};              % Use right classifier.
-    e0(i,li) = a(J,R)*w*testc;
+
+    if classs
+      for ci = 1:c
+        J = [J;JR(ci,1:nj)'];
+        if isempty(repsize)
+          R = [R JR(ci,1:nj)];
+        elseif repsize < 1
+          R = [R JR(ci,1:ceil(repsize*nj))];
+        else
+          R = [R JR(ci,1:min(nj,repsize))];
+        end
+      end;
+    else
+      J = JR(1:nj);
+      if isempty(repsize)
+        R = JR;
+      elseif repsize < 1
+        R = JR(1:ceil(repsize*nj));
+      else
+        R = JR(1:min(nj,repsize));
+      end
+    end;
+
+    trainset = a(J,R);
+    trainset = setprior(trainset,getprior(trainset,0));
+    w = trainset*classf{wi};            % Use right classifier.
+    e0(i,li) = trainset*w*testfun;
     if (isempty(t))
       Jt = ones(m,1);
       Jt(J) = zeros(size(J));
       Jt = find(Jt);                    % Don't use training set for testing.
-      e1(i,li) = a(Jt,R)*w*testc;
+      testset = a(Jt,R);
+      testset = setprior(testset,getprior(testset,0));
+      e1(i,li) = testset*w*testfun;
     else
-      e1(i,li) = t(:,R)*w*testc;
+      testset = t(:,R);
+      testset = setprior(testset,getprior(testset,0));
+      e1(i,li) = testset*w*testfun;
     end
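
The practical effect of the new TESTFUN argument is that the evaluation routine is no longer hard-wired to TESTC. A minimal usage sketch, not part of the changeset; it assumes PRTools and distools are on the path, and GENDATB, DISTM, KNNC and FISHERC are only illustrative choices:

  % Illustrative sketch, not part of the changeset.
  a = gendatb([50 50]);                  % toy 2-class feature data (PRTools banana set)
  d = distm(a);                          % square dissimilarity dataset between all objects of a
  classf = {knnc([],1), fisherc};        % untrained classifiers, as a cell array
  trainsizes = [2 3 5 10 20];            % training set sizes (per class when priors are set)
  repsize = 0.5;                         % representation set: half of each training set
  nreps = 5;                             % average over 5 random repetitions

  % Old-style call: default test routine, i.e. TESTC([],'crisp')
  e = clevald(d,classf,trainsizes,repsize,nreps);

  % New in r20: pass an explicit evaluation mapping as TESTFUN
  e = clevald(d,classf,trainsizes,repsize,nreps,[],testc([],'crisp'));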
distools/issquare.m
(r10 -> r20)

 % DESCRIPTION
 % True is D is a square dissimilarity matrix dataset. This includes
-% the check whether feature labels equal object labels.
-% If called without an output argument ISSQUARE generates an error
-% if D is not square.
+% the check (in case of crisp dataset D) whether feature labels equal
+% object labels. If called without an output argument ISSQUARE generates an
+% error if D is not square.

 % Copyright: Elzbieta Pekalska, ela.pekalska@googlemail.com
…

 if m == k
-  n = nlabcmp(getfeatlab(d),getlabels(d));
-  OK = (n == 0);
+  if islabtype(d,'crisp')
+    n = nlabcmp(getfeatlab(d),getlabels(d));
+    OK = (n == 0);
+  else
+    OK = 1;
+  end
 else
   OK = 0;
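
A short illustration of the two calling modes described in the header (sketch only; d is any dissimilarity dataset, e.g. the one built in the CLEVALD example above):

  % Illustrative sketch, not part of the changeset.
  ok = issquare(d);   % with an output argument: 1 for a square dissimilarity dataset, 0 otherwise
  issquare(d);        % without an output argument: raises an error if d is not square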
distools/nne.m
(r10 -> r20)

 [d,M] = min(D');
 e = mean(nlab(M) ~= nlab);
-NNlab = lablist(nlab(M),:);
+if islabtype(D,'crisp')
+  NNlab = lablist(nlab(M),:);
+else
+  labs = gettargets(D);
+  NNlab = labs(M,:);
+end
 return;
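
For context, NNE is typically called as below; the changed lines only affect the second output, which after this change is also filled for soft-labeled data (sketch, reusing the dissimilarity dataset d from the example above):

  % Illustrative sketch, not part of the changeset.
  [e,NNlab] = nne(d);   % nearest-neighbour error on d and, per object, the label of its
                        % nearest neighbour (targets are used when labels are soft)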
distools/pe_em.m
(r10 -> r20)

 % by D*W. The signature of the obtained PE space (numbers of positive and negative
 % directions) can be found by PE_SIG(W). The spectrum of the obtained space
-% can be found by PE_SPEC(W).
-%
-% A trained mapping can be reduced further by: W = PE_EM(W,ALF)
-% The signature of the obtained PE space can be found by PE_SIG(W)
-% The spectrum of
+% can be found by PE_SPEC(W).
 %
 % SEE ALSO
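
The retained header text describes the following workflow; a sketch assuming a square dissimilarity dataset d as above (variable names are illustrative):

  % Illustrative sketch, not part of the changeset.
  w = pe_em(d);       % pseudo-Euclidean embedding of the dissimilarity data
  x = d*w;            % map the dissimilarities into the PE space, as the header states
  sig = pe_sig(w);    % numbers of positive and negative directions (the signature)
  spec = pe_spec(w);  % spectrum of the obtained PE space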
distools/plotspectrum.m
(r10 -> r20)

   L = getdata(L,4);
   tit = 'Embedding Spectrum';
-elseif strcmp(getmapping_file(L),'affine')
+elseif strcmp(getmapping_file(L),'affine') | strcmp(getmapping_file(L),'pe_em')
   try
-    L = getdata(L,'eigenvalues');
+    L = getdata(L,'eval');
     tit = 'Eigenvalues';
   catch
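
The added branch means a mapping trained by PE_EM can now be passed directly to PLOTSPECTRUM; a sketch continuing the PE_EM example above:

  % Illustrative sketch, not part of the changeset.
  w = pe_em(d);       % trained pseudo-Euclidean embedding (mapping file 'pe_em')
  plotspectrum(w);    % now accepted: the eigenvalues are taken from the 'eval' data field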